TRANSCRIPT

Analytic Methods in Partial Differential Equations
A collection of notes and insights into the world of differential equations.
Gregory T. von Nessi
October 31, 2011
Version: 31.10.2011
Contents

Contents
Where I'm Coming From

I Classical Approaches
Chapter 1: Physical Motivations
1.1 Introduction
1.2 The Heat Equation
1.2.1 A Note on Physical Assumptions
1.3 Classical Model for Traffic Flow
1.4 The Transport Equation
1.4.1 Method of Characteristics: A First Look
1.4.2 Duhamel's Principle
1.5 Minimal Surfaces
1.5.1 Divergence Theorem and Integration by Parts
1.6 The Wave Equation
1.7 Classification of PDE
1.8 Exercises

Chapter 2: Laplace's Equation ∆u = 0 and Poisson's Equation −∆u = f
2.1 Laplace's Eqn. in Polar Coordinates
2.2 The Fundamental Solution of the Laplacian
2.3 Properties of Harmonic Functions
2.3.1 WMP for Laplace's Equation
2.3.2 Poisson Integral Formula
2.3.3 Strong Maximum Principle
2.4 Estimates for Derivatives of Harmonic Functions
2.4.1 Mean-Value and Maximum Principles
2.4.2 Regularity of Solutions
2.4.3 Estimates on Harmonic Functions
2.4.4 Analyticity of Harmonic Functions
2.4.5 Harnack's Inequality
2.5 Existence of Solutions
2.5.1 Green's Functions
2.5.2 Green's Functions by Reflection Methods
2.5.2.1 Green's Function for the Halfspace
2.5.2.2 Green's Function for Balls
2.5.3 Energy Methods for Poisson's Equation
2.5.4 Perron's Method of Subharmonic Functions
2.5.4.1 Boundary behavior
2.6 Solution of the Dirichlet Problem by the Perron Process
2.6.1 Convergence Theorems
2.6.2 Generalized Definitions & Harmonic Lifting
2.6.3 Perron's Method
2.7 Exercises
Chapter 3: The Heat Equation
3.1 Fundamental Solution
3.2 The Non-Homogeneous Heat Equation
3.2.1 Duhamel's Principle
3.3 Mean-Value Formula for the Heat Equation
3.4 Regularity of Solutions
3.4.1 Application of the Representation to Local Estimates
3.5 Energy Methods
3.6 Backward Heat Equation
3.7 Exercises

Chapter 4: The Wave Equation
4.1 D'Alembert's Representation of Solution in 1D
4.1.1 Reflection Argument
4.1.2 Geometric Interpretation (h ≡ 0)
4.2 Representations of Solutions for n = 2 and n = 3
4.2.1 Spherical Averages and the Euler-Poisson-Darboux Equation
4.2.2 Kirchhoff's Formula in 3D
4.2.3 Poisson's Formula in 2D
4.2.4 Representation Formulas in 2D/3D
4.3 Solutions of the Non-Homogeneous Wave Equation
4.3.1 Duhamel's Principle
4.3.2 Solution in 3D
4.4 Energy Methods
4.4.1 Finite Speed of Propagation
4.5 Exercises
Chapter 5: Fully Non-Linear First Order PDEs
5.1 Introduction
5.2 Method of Characteristics
5.2.1 Strategy to Prove that the Method of Characteristics Gives a Solution of F(Du, u, x) = 0
5.3 Exercises

Chapter 6: Conservation Laws
6.1 The Rankine-Hugoniot Condition
6.1.1 Information about Discontinuous Solutions
6.2 Shocks, Rarefaction Waves and Entropy Conditions
6.2.1 Burgers' Equation
6.3 Viscous Approximation of Shocks
6.4 Geometric Solution of the Riemann Problem for General Fluxes
6.5 A Uniqueness Theorem by Lax
6.6 Entropy Function for Conservation Laws
6.7 Conservation Laws in nD and Kruzkov's Stability Result
6.8 Exercises

Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
7.1 Classification of 2nd Order PDE in Two Variables
7.2 Maximum Principles for Elliptic Equations of 2nd Order
7.2.1 Weak Maximum Principles
7.2.1.1 WMP for Purely 2nd Order Elliptic Operators
7.2.1.2 WMP for General Elliptic Equations
7.2.2 Strong Maximum Principles
7.3 Maximum Principles for Parabolic Equations
7.3.1 Weak Maximum Principles
7.3.2 Strong Maximum Principles
7.4 Exercises
II Functional Analytic Methods in PDE

Chapter 8: Facts from Functional Analysis and Measure Theory
8.1 Introduction
8.2 Banach Spaces and Linear Operators
8.3 Fredholm Alternative
8.4 Spectral Theorem for Compact Operators
8.5 Dual Spaces and Adjoint Operators
8.6 Hilbert Spaces
8.6.1 Fredholm Alternative in Hilbert Spaces
8.7 Weak Convergence and Compactness; the Banach-Alaoglu Theorem
8.8 Key Words from Measure Theory
8.8.1 Basic Definitions
8.8.2 Integrability
8.8.3 Calderon-Zygmund Decomposition
8.8.4 Distribution functions
8.9 Exercises
Chapter 9: Function Spaces
9.1 Introduction
9.2 Hölder Spaces
9.3 Lebesgue Spaces
9.3.1 Convolutions and Mollifiers
9.3.2 The Dual Space of Lp
9.3.3 Weak Convergence in Lp
9.4 Morrey and Campanato Spaces
9.4.1 L∞ Campanato Spaces
9.5 The Spaces of Bounded Mean Oscillation
9.6 Sobolev Spaces
9.6.1 Calculus with Sobolev Functions
9.6.2 Boundary Values, Boundary Integrals, Traces
9.6.3 Extension of Sobolev Functions
9.6.4 Traces
9.6.5 Sobolev Inequalities
9.6.6 The Dual Space of H^1_0 = W^{1,2}_0
9.7 Sobolev Imbeddings
9.7.1 Campanato Imbeddings
9.7.2 Compactness of Sobolev/Morrey Imbeddings
9.7.3 General Sobolev Inequalities
9.8 Difference Quotients
9.9 A Note about Banach Algebras
9.10 Exercises
Chapter 10: Linear Elliptic PDE
10.1 Existence of Weak Solutions for Laplace's Equation
10.2 Existence for General Elliptic Equations of 2nd Order
10.2.1 Application of the Fredholm Alternative
10.2.2 Spectrum of Elliptic Operators
10.3 Elliptic Regularity I: Sobolev Estimates
10.3.1 Caccioppoli Inequalities
10.3.2 Interior Estimates
10.3.3 Global Estimates
10.4 Elliptic Regularity II: Schauder Estimates
10.4.1 Systems with Constant Coefficients
10.4.2 Systems with Variable Coefficients
10.4.3 Interior Schauder Estimates for Elliptic Systems in Non-divergence Form
10.4.4 Regularity Up to the Boundary: Dirichlet BVP
10.4.4.1 The nonvariational case
10.4.5 Existence and Uniqueness of Elliptic Systems
10.5 Estimates for Higher Derivatives
10.6 Eigenfunction Expansions for Elliptic Operators
10.7 Applications to Parabolic Equations of 2nd Order
10.7.1 Outline of Evans' Approach
10.7.2 Evans' Approach to Existence
10.8 Neumann Boundary Conditions
10.8.1 Fredholm's Alternative and Weak Solutions of the Neumann Problem
10.8.2 Fredholm Alternative Review
10.9 Semigroup Theory
10.10 Applications to 2nd Order Parabolic PDE
10.11 Exercises

Chapter 11: Distributions and Fourier Transforms
11.1 Distributions
11.1.1 Distributions on C∞ Functions
11.1.2 Tempered Distributions
11.1.3 Distributional Derivatives
11.2 Products and Convolutions of Distributions; Fundamental Solutions
11.2.1 Convolutions of Distributions
11.2.2 Derivatives of Convolutions
11.2.3 Distributions as Fundamental Solutions
11.3 Fourier Transform of S(Rn)
11.4 Definition of Fourier Transform on L2(Rn)
11.5 Applications of FTs: Solutions of PDEs
11.6 Fourier Transform of Tempered Distributions
11.7 Exercises

Chapter 12: Variational Methods
12.1 Direct Method of Calculus of Variations
12.2 Integral Constraints
12.2.1 The Implicit Function Theorem
12.2.2 Nonlinear Eigenvalue Problems
12.3 Uniqueness of Solutions
12.4 Energies
12.5 Weak Formulations
12.6 A General Existence Theorem
12.7 A Few Applications to Semilinear Equations
12.8 Regularity
12.8.1 DeGiorgi-Nash Theorem
12.8.2 Second Derivative Estimates
12.8.3 Remarks on Higher Regularity
12.9 Exercises
III Modern Methods for Fully Nonlinear Elliptic PDE

Chapter 13: Non-Variational Methods in Nonlinear PDE
13.1 Viscosity Solutions
13.2 Alexandrov-Bakel'man Maximum Principle
13.2.1 Applications
13.2.1.1 Linear equations
13.2.1.2 Monge-Ampère Equation
13.2.1.3 General linear equations
13.2.1.4 Oblique boundary value problems
13.3 Local Estimates for Linear Equations
13.3.1 Preliminaries
13.3.2 Local Maximum Principle
13.3.3 Weak Harnack inequality
13.3.4 Hölder Estimates
13.3.5 Boundary Gradient Estimates
13.4 Hölder Estimates for Second Derivatives
13.4.1 Interior Estimates
13.4.2 Function Spaces
13.4.3 Interpolation inequalities
13.4.4 Global Estimates
13.4.5 Generalizations
13.5 Existence Theorems
13.5.1 Method of Continuity
13.5.2 Uniformly Elliptic Case
13.5.3 Monge-Ampère equation
13.5.4 Degenerate Monge-Ampère Equation
13.5.5 Boundary Curvatures
13.5.6 General Boundary Values
13.6 Applications
13.6.1 Isoperimetric inequalities
13.7 Exercises

Appendices

Appendix A: Notation

Bibliography

Index
Where I’m Coming From
You may think me a freak or just completely insane, but I love partial differential equations (PDE). It is important that I get this out quickly, because I don't want you to get all committed to reading this book and then start thinking I've just started going all weird on you halfway through. I believe in being honest in relationships, and writing a textbook on PDE should be no different... right? So, if you are going to have issues with me being... excessively demonstrative... with this subject, it is probably best to put the book down now and walk away... slowly.
When I was going through school, I was extremely fortunate to have all of my PDE
courses (I took many) taught by really awesome professors; and it’s probably fair to say that
this continues to influence my thoughts and feelings toward the subject to this very day.
My professors were not the type to drone out solution estimates or regurgitate whatever
mathematical progression was necessary to complete a proof. Instead, I was taught PDE
from various intuitive perspectives. It’s not to say that there was no rigour in my education
(there was quite a bit); but my teachers never let me lose sight of the bigger picture, be it
mathematical, physical or computational in nature.
This book embodies my own endeavour to share with you the enthusiasm and intuition I have gained in this area of maths. I am no stranger to the idea that PDE is generally viewed as a terrifying subject that only masochists or the clinically insane truly enjoy. While
I am going to refrain from commenting on my own mental stability (or lack thereof), I will
say that I believe that PDE (and mathematics in general) can be understood by anyone. It’s
not to say that I believe everyone should be a mathematician; but I think if someone likes
the subject, they can learn it. This is where I am coming from.
As much as I would like to present a 10,000 page self-contained treatise on the subject
of PDE, I am told that I will be dead before such a manuscript would get through peer
review. Given that I am writing this book for fame and money, having this book published
post-mortem is really not a viable option. So, I have decided to focus on the functional-analytic foundation of PDE in this book; but this will require you to know some things about real analysis and measure theory to suck out all of the information contained in these pages.
However, I still encourage anyone to at least flip through this book (or download/steal it...
Go ahead be a pirate! ARGHHH!) because my hope is that you will still gain some insight
into this really awesome and beautiful subject. Even if you have no understanding of maths,
I still encourage you to read the words and try to absorb the maths by some osmotic means.
Just keep in mind that no matter how scary the maths looks, the underlying concepts are
always within your grasp to understand. I believe in you!
If you get through this book, will you love PDE? That I cannot honestly say. To me PDE is like Opera (the music, not the web browser), in the sense that (to rip off the sentiment from Pretty Woman) some fall in love with Opera the first time they hear it, while others can grow to appreciate it over time but never truly love it. Again, I believe you have all the abilities to grasp every aspect of PDE, but whether or not you love or appreciate it is intrinsic to your personality (I feel). My hope is that by reading this book you are able
to figure out whether you are a lover of PDE or lean more toward appreciating the subject
(both are great). As for myself, PDE represents a new perspective by which I can see not
only abstract ideas but also every aspect of the universe that surrounds me. If you can at
least gain a glimpse of that perspective by reading this book, I will be happy.
As I said earlier, my enthusiasm for this subject has been cultivated by the teachings of
great professors from my past. I just want to thank Kiwi (James Graham-Eagle), Stephen
(Pennell), (Louis) Rossi, Georg (Dolzmann) and Neil (Trudinger) for robbing me of my sanity
and showing me that I am a lover of PDE.
Finally, I am initially distributing this work under an international Creative Commons license that allows for the free distribution of these notes (but prevents commercial usage and modification). This license enables me to make these notes available as I continue the slow process of their revision. The license mark is in the footer of each page, and you can find out more by visiting the Creative Commons website at http://creativecommons.org.
Gregory T. von Nessi Jr.
Part I
Classical Approaches
1 Physical Motivations
“. . . because it’s dull you TWIT; it’ll ’urt more!”
— Alan Rickman
Contents
1.1 Introduction
1.2 The Heat Equation
1.3 Classical Model for Traffic Flow
1.4 The Transport Equation
1.5 Minimal Surfaces
1.6 The Wave Equation
1.7 Classification of PDE
1.8 Exercises
1.1 Introduction

As the chapter title indicates, the following material is meant simply to acquaint the reader with the concept of Partial Differential Equations (PDE). The chapter starts out with a few simple physical derivations that motivate some of the most fundamental and famous PDE. In some of these examples, analytic techniques such as Duhamel's Principle are introduced to whet the reader's appetite for the material to come. After this initial motivation, basic definitions and classifications of PDE are presented before moving on to the next chapter.
1.2 The Heat Equation

One of the most famous PDE, if not THE most famous, is the Heat Equation. Obviously, this equation is somehow connected with the physical concept of heat energy, as the name not-so-subtly suggests. The natural questions now are: what is the heat equation, and where does it come from? To answer these, we will first construct a physical model; but before we proceed, let us define some notation.
[Figure: an arbitrary ball B contained in the heat-conducting body Ω]
In our model, take Ω to be a heat-conducting body, ρ(x) the mass density at x ∈ Ω, c(x) the specific heat at x ∈ Ω, and u(x, t) the temperature at point x and time t. Also take B to be an arbitrary ball in Ω.
In order to construct any physical model, one has to make some assumptions about the reality one is trying to replicate. In our model we assume that heat energy is conserved in any subset of Ω; in particular, it is conserved in B.
With this in mind, we first consider the total heat energy in B, which is
\[
\int_B \rho(x)c(x)u(x,t)\,dx;
\]
and the rate of change of the heat energy in B is
\[
\frac{d}{dt}\int_B \rho(x)c(x)u(x,t)\,dx.
\]
If we know the heat flux rate, \(\vec F(x,t)\) (a vector-valued function), across a surface element A centered at x with normal \(\vec\nu(x)\), then we know the flux across that surface element is given by
\[
|A|\,\vec F(x,t)\cdot\vec\nu(x).
\]
Thus, using the assumption that heat energy is conserved in B, we calculate
\[
\frac{d}{dt}\int_B \rho(x)c(x)u(x,t)\,dx
= -\int_{\partial B} \vec F(x,t)\cdot\vec\nu(x)\,dS
= -\int_B \operatorname{div}\vec F(x,t)\,dx.
\]
Upon rewriting this equality, we see that
\[
\int_B \underbrace{\bigl(\rho(x)c(x)u_t(x,t) + \operatorname{div}\vec F(x,t)\bigr)}_{\text{turns out to be smooth}}\,dx = 0.
\]
Since B is arbitrary, this last equation must be true for all balls B ⊂ Ω. We can now say
\[
\rho(x)c(x)u_t(x,t) + \operatorname{div}\vec F(x,t) = 0 \quad \text{in } \Omega. \tag{1.1}
\]
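A quick aside on the localization step just used (my own filler; the text takes it for granted): if f is continuous and \(\int_B f\,dx = 0\) for every ball B ⊂ Ω, then f ≡ 0. Indeed, if f(x₀) > 0 at some x₀, continuity gives a small ball B_r(x₀) on which f > f(x₀)/2, so that

```latex
\int_{B_r(x_0)} f\,dx \;\ge\; \tfrac{1}{2}\,f(x_0)\,\bigl|B_r(x_0)\bigr| \;>\; 0,
```

contradicting the hypothesis; the case f(x₀) < 0 is symmetric. Applying this with f the (smooth) integrand above yields (1.1).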
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
4Chapter 1: Physical Motivations
We seek u, but we have only one equation, and it involves the unknown heat flux \(\vec F\); in order to proceed with our model, we must make another assumption based on what (we think) we know of reality. So we make the following constitutive assumption, known as Fourier's law of heat conduction, as to what the heat flux "looks like" mathematically:
\[
\vec F(x,t) = -k(x)\,Du(x,t).
\]
(See Appendix A for notation.) Plugging this constitutive assumption into (1.1), we get
\[
\rho(x)c(x)u_t(x,t) - \operatorname{div}\bigl(k(x)\,Du(x,t)\bigr) = 0. \tag{1.2}
\]
To further simplify the problem, we consider the special case in which ρ(x), c(x) and k(x) are all constant; take them to be ρ, c and k respectively. Now, we can further manipulate the second term on the LHS of (1.2):
\[
\operatorname{div} Du
= \operatorname{div}\Bigl(\frac{\partial u}{\partial x_1}, \dots, \frac{\partial u}{\partial x_n}\Bigr)
= \frac{\partial^2 u}{\partial x_1^2} + \dots + \frac{\partial^2 u}{\partial x_n^2}
= \Delta u \ (\equiv \nabla^2 u).
\]
Thus we conclude that (1.2) can be written as
\[
\rho c\,u_t(x,t) = k\,\Delta u
\quad\Longleftrightarrow\quad
u_t(x,t) = K\,\Delta u, \tag{1.3}
\]
where K = k/(ρc) is the diffusion constant. (1.3) is the legendary Heat Equation in all its glory.
We shall see later that solutions of the Heat Equation evolve to a steady state which does not change in time. These steady state solutions thus solve
\[
\Delta u = 0. \tag{1.4}
\]
This is Laplace's Equation, perhaps the most ubiquitous equation in all of physics.
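As a concrete companion to (1.3) and (1.4), here is a minimal finite-difference sketch (entirely my own illustration; the grid size, diffusion constant and initial profile are made-up choices, not from the text) that evolves the 1D heat equation with the temperature held at zero at both ends. The profile decays toward the steady state u ≡ 0, which indeed satisfies Laplace's equation:

```python
# Explicit (FTCS) time-stepping for u_t = K * u_xx on [0, 1] with u = 0 at
# both ends.  All numbers below are illustrative.
import numpy as np

def heat_step(u, K, dx, dt):
    """Advance one explicit time step; interior points only, ends held at 0."""
    unew = u.copy()
    unew[1:-1] = u[1:-1] + K * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return unew

n, K = 51, 1.0
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / K           # respects the stability bound dt <= dx^2 / (2K)
u = np.sin(np.pi * x)          # initial temperature profile

for _ in range(2000):
    u = heat_step(u, K, dx, dt)

# The amplitude has decayed by roughly exp(-pi^2 * K * t): the solution is
# relaxing toward the steady state u = 0, a solution of Laplace's equation.
print(float(abs(u).max()))
```

The stability restriction dt ≤ dx²/(2K) is a property of this particular explicit scheme, not of the PDE itself.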
1.2.1 A Note on Physical Assumptions

[The reader may skip this section if they feel comfortable with the concepts of conservation laws and constitutive assumptions.]

In our derivation of the Heat Equation, we used two physical observations to construct a model of heat flow. Specifically, we first assumed that heat energy was conserved; this is a particular manifestation of what is known as a conservation law. This was followed by a constitutive assumption based on the actual observed behavior of heat. Considering conservation laws, it is observed that many quantities are conserved in nature; examples are conservation of mass/energy, momentum, angular momentum, etc. It turns out that these conservation laws correspond to specific symmetries (via Noether's Theorem). Physics, as we know it, is constructed on the assumed validity of these conservation laws; and it is through these laws that one finds that conclusions in physics manifest in the form of PDE. Some examples are Schrödinger's Equation, the Heat Equation (via the argument above), Einstein's Equations, and Maxwell's Equations, to name a few.
In the above derivation, the use of the conservation of energy did not depend on the fact that we were trying to model heat energy. Indeed, our model lost its generality (i.e. pertaining to the general flow of energy) via the constitutive assumption, which is based on observations pertaining specifically to heat flow. The constitutive assumption was simply a mathematical way of saying that heat energy is observed to move from a hot to a cold object, and that the rate of this flow increases with the temperature differential present. It is worth trying to understand this physically, through the example of your own sense of "temperature". To do this, one first should realize that when one feels "temperature", one is really feeling the heat energy flux at the surface of the skin. This is why people with hypothermia cease to feel "cold": their body's internal temperature has dropped below the normal 98.6 °F, and as a result the heat energy flux across the surface of the skin has decreased, since the temperature differential between the person's body and the outside air has decreased.
Now, let us move on to another model to try and solidify the ideas of conservation laws
and constitutive assumptions.
1.3 Classical Model for Traffic Flow
We will now proceed to construct a model for (very) simple traffic flow in the same spirit in which the heat equation was derived; i.e. our model will be born out of a conservation law and a constitutive assumption.

First, we define ρ(x, t) to represent the density of cars on an ideal highway (one that is straight and single-lane, with no ramps). Here, we take the highway to be represented by the x-axis. Since we do not expect cars to be created or destroyed (hopefully) on our highway, it is reasonable to assume that ρ(x, t) represents a conserved quantity. The corresponding conservation law is written as
\[
\rho_t(x,t) + \operatorname{div}\vec F(x,t) = 0.
\]
Again, \(\vec F\) represents the flux of car density at any given point in space-time:
\[
\vec F(x,t) = \rho(x,t)\,\vec v(x,t),
\]
where \(\vec v(x,t)\) is the velocity of cars at a given point in space-time. With that, we rewrite our conservation law as
\[
\rho_t + (\rho v)_x = 0. \tag{1.5}
\]
Now it is time to bring in a constitutive assumption based on traffic flow observations. The assumption we will use comes from the Lighthill-Whitham-Richards model of traffic flow, and it corresponds to \(\vec v(x,t)\) being assumed to have the form
\[
\vec v(x,t) = v_{\max}\left(1 - \frac{\rho(x,t)}{\rho_{\max}}\right)\hat{x},
\]
where v_max is the maximum car velocity (the speed limit) and ρ_max is the maximum density of cars (parked bumper-to-bumper). Looking at this assumption, we see that this model rather naively posits that car velocity decreases linearly with car density: cars approach the speed limit as their density goes to zero and, conversely, slow to a stop as they get bumper-to-bumper. Using this constitutive assumption with our conservation law (1.5) yields
\[
\rho_t + v_{\max}\left(\rho - \frac{\rho^2}{\rho_{\max}}\right)_x = 0
\quad\Longrightarrow\quad
\rho_t + v_{\max}\left(1 - \frac{2\rho}{\rho_{\max}}\right)\rho_x = 0. \tag{1.6}
\]
It turns out that this PDE predicts that a continuous density profile can develop a discontinuity in finite time; i.e. a car pile-up is an inevitability on an ideal highway!
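To make the pile-up claim concrete, here is a small sketch (my own illustration with hypothetical numbers; the rigorous treatment comes later, in the chapter on conservation laws): along the characteristics of (1.6) the density is constant, and each characteristic is a straight line with speed v_max(1 − 2ρ/ρ_max), so a characteristic launched from light traffic behind heavy traffic must catch up to one ahead of it in finite time:

```python
# Characteristics of rho_t + c(rho) * rho_x = 0 for the LWR flux, where
# c(rho) = v_max * (1 - 2*rho/rho_max).  When denser traffic lies ahead,
# rear characteristics are faster and cross -- the finite-time pile-up.

def char_speed(rho, v_max=1.0, rho_max=1.0):
    """Characteristic speed carried by a point with density rho."""
    return v_max * (1.0 - 2.0 * rho / rho_max)

def crossing_time(x0, rho0, x1, rho1):
    """Time at which the straight-line characteristics from (x0, rho0) and
    (x1, rho1) meet, or None if they never do (x0 < x1 assumed)."""
    dc = char_speed(rho0) - char_speed(rho1)
    return (x1 - x0) / dc if dc > 0 else None

# Light traffic (rho = 0.2) at x = 0 behind heavy traffic (rho = 0.8) at x = 1:
t_shock = crossing_time(0.0, 0.2, 1.0, 0.8)
print(t_shock)  # finite -> a discontinuity (shock) forms
```

Note the converse case (heavy traffic behind light) gives no crossing: the characteristics spread apart, which is the rarefaction-wave situation treated later.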
1.4 The Transport Equation
Here we will again present another PDE via a physical derivation. In this model we seek
to understand the dispersion of an airborne pollutant in the presence of a constant, unidirectional
wind. For this, we take u(x, t) to be the pollutant density, and ~b to be the velocity
of the wind, which we are assuming to be constant in both space and time.
Notation: From this point forward, the vector notation over symbols will be dropped, as
the context will indicate whether or not a quantity is a vector. With this in mind, we acknowledge
that x may indeed represent a spatial variable in any number of dimensions (although
for physical relevance, x ∈ R3; but this restriction is not necessary in our mathematical
derivation).
If we assume that the pollutant doesn't react chemically with the surrounding air, then
its total mass is conserved:
ut + div F (x, t) = 0,
where
F (x, t) = u(x, t)b.
Again, the form of the pollutant density flux is borne of a constitutive assumption
regarding the dispersion of pollutant. It is easy to see that this assumption corresponds to
the pollutant simply being carried along by the wind. Combining the above conservation law
and constitutive assumption yields
\[
u_t + Du \cdot b = 0,
\]
which is called the Transport Equation. Viewed in a space-time setting, this equation corresponds
to the directional derivative of u (as a function of x and t) being zero in the direction (b, 1):
\[
\underbrace{(Du, u_t) \cdot (b, 1)}_{\text{directional derivative}} = 0. \tag{1.7}
\]
Before moving on to another PDE, let us see if we can make any progress in terms of
solving (1.7).
1.4.1 Method of Characteristics: A First Look
We expect u to be constant on the integral curves. The curve through (x, t) is γ(s) = (x + sb, t + s). Set
\[
z(s) = u(\gamma(s)) = u(x + sb, t + s),
\]
so that
\[
z'(s) = Du(x + sb, t + s) \cdot b + u_t(x + sb, t + s) = 0.
\]
If we know the concentration of the pollutant at t = 0, i.e. we are solving the initial value
problem
\[
u_t + Du \cdot b = 0, \qquad u(x, 0) = g(x),
\]
then u(x, t) = u(γ(0)) by the definition of γ, and u(γ(0)) = u(γ(s)) for all s, because u is
constant on the curve γ. Thus u(x, t) is the value of g at the intersection of the line γ(s)
with the plane Rⁿ × {t = 0}, which occurs at s = −t:
\[
u(x, t) = u(x - tb, 0) = g(x - tb).
\]
Remarks:
1. If you want to check that u(x, t) = g(x − tb) is a solution of u_t + Du · b = 0, then we
need g to be differentiable. The formula also makes sense for g not differentiable, and
u(x, t) = g(x − tb) would then be a so-called generalized or weak solution.
2. We were solving the PDE by solving the ODE
\[
z'(s) = 0, \qquad z(-t) = u(\gamma(-t)) = u(x - tb, 0) = g(x - tb).
\]
This is a general concept, which is called the method of characteristics.
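As a quick numerical sanity check (a sketch only; the choices of b and g below are arbitrary, not from the text), one can verify by central finite differences that u(x, t) = g(x − tb) solves the transport equation in one space dimension:

```python
import numpy as np

# Central-difference check that u(x, t) = g(x - t*b) solves u_t + b*u_x = 0,
# with the arbitrary choices b = 1.3 and g(x) = exp(-x^2).
b, h = 1.3, 1e-5

def u(x, t):
    return np.exp(-(x - t*b)**2)

x0, t0 = 0.7, 0.4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2*h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2*h)
residual = u_t + b*u_x      # vanishes up to O(h^2) discretization error
```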
1.4.2 Duhamel’s Principle
If you can solve the homogeneous equation u_t + Du · b = 0, then you can solve the inhomogeneous
equation u_t + Du · b = f(x, t), where f(x, t) is a source of pollutant (e.g. a power plant).
Idea: Suppose the pollutant were released only once per hour; then you could trace the
evolution of each release by the homogeneous equation, and add this evolution to the
evolution of the initial data. The equation we want to solve is
\[
u_t + Du \cdot b = f, \qquad u(x, 0) = g(x).
\]
Since the sum of two solutions is a solution (the equation is linear), we can write u = v + w, where
\[
v_t + Dv \cdot b = 0, \quad v(x, 0) = g(x); \qquad
w_t + Dw \cdot b = f, \quad w(x, 0) = 0.
\]
Check:
\[
u_t + Du \cdot b = (v + w)_t + D(v + w) \cdot b
= \underbrace{(v_t + Dv \cdot b)}_{=0} + \underbrace{(w_t + Dw \cdot b)}_{=f} = f,
\]
\[
u(x, 0) = v(x, 0) + w(x, 0) = g(x) + 0 = g(x).
\]
Suspect:
\[
w(x, t) = \int_0^t w(x, t; s)\,ds,
\]
where
\[
w_t(x, t; s) + Dw(x, t; s) \cdot b = 0 \ \text{ for } x \in \mathbb{R}^n,\ t > s, \qquad
w(x, s; s) = f(x, s).
\]
Solutions:
\[
v(x, t) = g(x - tb), \qquad w(x, t; s) = f(x + (s - t)b, s).
\]
Expectation:
\[
u(x, t) = g(x - tb) + \int_0^t f(x + (s - t)b, s)\,ds.
\]
Theorem 1.4.1: Let f and g be smooth. Then the solution of the initial value problem
\[
u_t + Du \cdot b = f, \qquad u(x, 0) = g
\]
is given by
\[
u(x, t) = g(x - tb) + \int_0^t f(x + (s - t)b, s)\,ds.
\]
Proof: Differentiate explicitly.
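The theorem can also be checked numerically. In the sketch below the choices b = 1, g(x) = sin x and f(x, t) = cos(x − t) are assumptions made for convenience; for them the Duhamel integral has the closed form u(x, t) = sin(x − t) + t cos(x − t), and midpoint quadrature reproduces it:

```python
import numpy as np

# Duhamel's formula in 1D with b = 1, g(x) = sin(x), f(x, t) = cos(x - t);
# the formula then gives u(x, t) = sin(x - t) + t*cos(x - t) in closed form.
def g(x):
    return np.sin(x)

def f(x, t):
    return np.cos(x - t)

def u(x, t, m=2000):
    s = (np.arange(m) + 0.5) * t / m          # midpoint rule on [0, t]
    return g(x - t) + np.sum(f(x + (s - t), s)) * t / m

x0, t0 = 0.3, 0.9
err = abs(u(x0, t0) - (np.sin(x0 - t0) + t0*np.cos(x0 - t0)))
```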
1.5 Minimal Surfaces
Let D be the unit disk in the plane, and consider all surfaces that are graphs of functions u
on D with a given fixed boundary curve.
Question: Find a surface with minimal surface area A(u), where
\[
A(u) = \iint_D \sqrt{1 + |Du|^2}\,dx.
\]
Idea: Find a PDE that is a necessary condition for A(u) to be minimal. If you take any φ
which is zero on ∂D, then A(u + φ) ≥ A(u). Define
\[
g(\varepsilon) = A(u + \varepsilon\varphi) \geq A(u) \qquad \forall \varepsilon \in \mathbb{R};
\]
we have a minimum (as a function of ε!) at ε = 0, and thus g′(0) = 0:
\[
0 = \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} g(\varepsilon)
= \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} \iint_D \sqrt{1 + |D(u + \varepsilon\varphi)|^2}\,dx
= \frac{1}{2} \iint_D \frac{2Du \cdot D\varphi + 2\varepsilon|D\varphi|^2}{\sqrt{1 + |Du + \varepsilon D\varphi|^2}}\,dx\,\bigg|_{\varepsilon=0}
\]
\[
= \iint_D \frac{Du \cdot D\varphi}{\sqrt{1 + |Du|^2}}\,dx
= -\iint_D \operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)\varphi\,dx.
\]
This holds for all φ, so
\[
\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0 \qquad \text{(Minimal Surface Equation)}
\]
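As a numerical aside, one classical solution is Scherk's surface u(x, y) = log(cos y / cos x) on |x|, |y| < π/2. The sketch below checks it against the expanded two-dimensional form of the minimal surface equation, u_xx(1 + u_y²) + u_yy(1 + u_x²) − 2u_xu_yu_xy = 0 (equivalent to the divergence form; cf. Exercise 1.6), using its analytic derivatives:

```python
import numpy as np

# Scherk's surface u = log(cos(y)/cos(x)) has u_x = tan(x), u_y = -tan(y),
# u_xx = sec^2(x), u_yy = -sec^2(y), u_xy = 0; plug these into the expanded
# 2D minimal surface equation and check the residual vanishes on a grid.
s = np.linspace(-1.2, 1.2, 41)
X, Y = np.meshgrid(s, s)

ux, uy = np.tan(X), -np.tan(Y)
uxx, uyy, uxy = 1.0/np.cos(X)**2, -1.0/np.cos(Y)**2, 0.0

residual = uxx*(1 + uy**2) + uyy*(1 + ux**2) - 2*ux*uy*uxy
max_residual = np.abs(residual).max()
```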
1.5.1 Divergence Theorem and Integration by Parts
Theorem 1.5.1 (Divergence Theorem): Let ~F be a vector field on a domain Ω with smooth
boundary ∂Ω = S and outward unit normal ~ν(x). Then
\[
\int_\Omega \operatorname{div} \vec{F}\,dx = \int_{\partial\Omega} \vec{F} \cdot \vec{\nu}\,dS \qquad \text{(Divergence Theorem)}
\]
Example: Take ~F(x) = w(x)~v(x), where ~v(x) is a vector field and w(x) a scalar. Then
\[
\text{LHS} = \int_\Omega \operatorname{div}(w(x)\vec{v}(x))\,dx
= \int_\Omega \sum_{i=1}^n \partial_i(w(x)v_i(x))\,dx
= \int_\Omega \sum_{i=1}^n \big[(\partial_i w(x))v_i(x) + w(x)(\partial_i v_i(x))\big]\,dx
= \int_\Omega \big[Dw \cdot \vec{v} + w\,\operatorname{div}\vec{v}\big]\,dx,
\]
\[
\text{RHS} = \int_{\partial\Omega} (w(x)\vec{v}(x)) \cdot \vec{\nu}(x)\,dS
= \int_{\partial\Omega} w\,\vec{v} \cdot \vec{\nu}\,dS.
\]
Thus, we have
\[
\int_\Omega Dw \cdot \vec{v}\,dx = \int_{\partial\Omega} w\,\vec{v} \cdot \vec{\nu}\,dS - \int_\Omega w\,\operatorname{div}\vec{v}\,dx, \tag{1.8}
\]
which is integration by parts.
In one dimension:
\[
\int_a^b w'v\,dx = wv\Big|_a^b - \int_a^b wv'\,dx
\]
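A quick numerical sketch of the one-dimensional identity, with the arbitrary choices w = x², v = sin x on [0, 1]:

```python
import numpy as np

# Midpoint-rule check of int_a^b w' v dx = [w v]_a^b - int_a^b w v' dx,
# with w = x^2, v = sin(x) on [0, 1].
N = 100000
h = 1.0 / N
xm = (np.arange(N) + 0.5) * h          # midpoints of [0, 1]

lhs = np.sum(2.0*xm * np.sin(xm)) * h
rhs = np.sin(1.0) - np.sum(xm**2 * np.cos(xm)) * h   # [w v]_0^1 = sin(1)
gap = abs(lhs - rhs)
```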
Applications:
1. Take ~v(x) = Du(x) and w(x) = φ(x):
\[
\int_\Omega D\varphi \cdot Du\,dx = \int_{\partial\Omega} \varphi\,Du \cdot \vec{\nu}\,dS - \int_\Omega \varphi\,\operatorname{div} Du\,dx,
\]
i.e.
\[
\int_\Omega D\varphi \cdot Du\,dx = \int_{\partial\Omega} \varphi\,\partial_\nu u\,dS - \int_\Omega \varphi\,\Delta u\,dx.
\]
2. Soap bubbles = minimal surfaces. The graph of u over D has surface area
\[
\int_D \sqrt{1 + |Du|^2}\,dx.
\]
Minimize.
Assume that u is a minimizer, i.e. that A(u) ≤ A(v) for all functions v with the same
boundary values. Consider the family of functions u + εφ, where φ has zero boundary values,
and look at g(ε) = A(u + εφ):
\[
g'(\varepsilon) = \int_\Omega \frac{d}{d\varepsilon}\sqrt{1 + |Du + \varepsilon D\varphi|^2}\,dx
= \int_\Omega \frac{d}{d\varepsilon}\sqrt{1 + |Du|^2 + 2\varepsilon Du \cdot D\varphi + \varepsilon^2|D\varphi|^2}\,dx
\]
\[
= \frac{1}{2}\int_\Omega \frac{2Du \cdot D\varphi + 2\varepsilon|D\varphi|^2}{\sqrt{1 + |Du|^2 + 2\varepsilon Du \cdot D\varphi + \varepsilon^2|D\varphi|^2}}\,dx.
\]
Now, since u is a minimizer,
\[
\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} g(\varepsilon) = 0
\implies \int_\Omega \frac{Du \cdot D\varphi}{\sqrt{1 + |Du|^2}}\,dx = 0.
\]
If we identify
\[
w = \varphi, \qquad \vec{v}(x) = \frac{Du}{\sqrt{1 + |Du|^2}},
\]
we can apply (1.8):
\[
\int_\Omega D\varphi \cdot \frac{Du}{\sqrt{1 + |Du|^2}}\,dx
= \int_{\partial\Omega} \varphi\,\frac{Du}{\sqrt{1 + |Du|^2}} \cdot \vec{\nu}\,dS
- \int_\Omega \varphi\,\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)dx = 0.
\]
The first term on the RHS is 0 since φ = 0 on ∂Ω. Thus,
\[
\int_\Omega \varphi\,\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)dx = 0
\]
for all φ that are zero on ∂Ω. If u is smooth, then the integrand must be zero pointwise,
i.e.
\[
\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0 \qquad \text{(Minimal surface equation)}
\]
1.6 The Wave Equation
[Ant80] comments on ad hoc derivations (e.g. of the wave equation). The outcome of this
"incomprehensible" derivation is the wave equation
\[
u_{tt} = c^2 u_{xx},
\]
which rests on Newton's law f = ma.
1.7 Classification of PDE
1. Diffusion equation (2nd order): u_t = kΔu
2. Traffic flow (1st order):
\[
\rho_t + v_{\max}\,\partial_x\left(\rho\left(1 - \frac{\rho}{\rho_{\max}}\right)\right) = 0
\]
3. Transport equation (1st order): u_t + Du · b = 0
4. Minimal surfaces (2nd order):
\[
\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0
\]
5. Wave equation (2nd order): u_tt = c²u_xx
Definition 1.7.1: A PDE is an expression of the form
\[
F(D^k u, D^{k-1}u, \dots, Du, u, x) = 0,
\]
where Du is the gradient of u, D²u the collection of all 2nd order derivatives of u, ...,
and Dᵏu the collection of all kth order derivatives of u. k is called the order of the PDE.
Definition 1.7.2: A PDE is linear if it is of the form
\[
\sum_{|\alpha| \le k} a_\alpha(x)\,D^\alpha u(x) = 0,
\]
where α is a multi-index.
Notation: α = (α₁, ..., αₙ) with αᵢ non-negative integers. We define
\[
|\alpha| = \sum_{i=1}^n \alpha_i,
\]
so that
\[
D^\alpha = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} \cdot \frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}} \cdots \frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}.
\]
Example:
• n = 2. α = (0, 0) corresponds to Dᵅu = u. |α| = 1 is associated with α = (1, 0) or α = (0, 1):
\[
\alpha = (1, 0):\ D^\alpha u = \frac{\partial u}{\partial x_1}, \qquad
\alpha = (0, 1):\ D^\alpha u = \frac{\partial u}{\partial x_2}.
\]
|α| = 2 is associated with α = (2, 0), α = (1, 1) or α = (0, 2):
\[
\alpha = (2, 0):\ D^\alpha u = \frac{\partial^2 u}{\partial x_1^2}, \qquad
\alpha = (1, 1):\ D^\alpha u = \frac{\partial^2 u}{\partial x_1 \partial x_2}, \qquad
\alpha = (0, 2):\ D^\alpha u = \frac{\partial^2 u}{\partial x_2^2}.
\]
• n = 3. Just look at α = (1, 1, 2) with u = u(x₁, x₂, x₃):
\[
D^\alpha u = \frac{\partial^4 u}{\partial x_1\,\partial x_2\,\partial x_3^2}.
\]
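As a toy illustration (not part of the formal development), multi-index derivatives are easy to implement for polynomials stored as a map from exponent tuples to coefficients; the example reproduces Dᵅu = 2 for u = x₁x₂x₃² and α = (1, 1, 2):

```python
# Multi-index derivative D^alpha acting on polynomials in n variables, stored
# as a dict {exponent tuple: coefficient}. A pure-Python sketch.
def d_alpha(poly, alpha):
    out = {}
    for exps, coeff in poly.items():
        c, new = coeff, list(exps)
        vanished = False
        for i, a in enumerate(alpha):
            for _ in range(a):          # differentiate a times in variable i
                if new[i] == 0:
                    vanished = True     # monomial is killed by differentiation
                    break
                c *= new[i]
                new[i] -= 1
            if vanished:
                break
        if not vanished:
            key = tuple(new)
            out[key] = out.get(key, 0) + c
    return {k: v for k, v in out.items() if v != 0}

# u = x1 * x2 * x3^2 with alpha = (1, 1, 2): D^alpha u = 2, a constant.
u = {(1, 1, 2): 1}
result = d_alpha(u, (1, 1, 2))
```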
Example: A 2nd order linear equation in 2D:
\[
\sum_{|\alpha| \le 2} a_\alpha(x)\,D^\alpha u(x)
= \underbrace{a_{(2,0)}(x)u_{xx} + a_{(1,1)}(x)u_{xy} + a_{(0,2)}(x)u_{yy}}_{|\alpha| = 2}
+ \underbrace{a_{(1,0)}(x)u_x + a_{(0,1)}(x)u_y}_{|\alpha| = 1}
+ \underbrace{a_{(0,0)}(x)u}_{|\alpha| = 0}
= f.
\]
The crucial term is the one with the highest order derivatives. Generalizations:
• A PDE is semilinear if it is linear in the highest order derivatives, i.e. it is of the form
\[
\sum_{|\alpha| = k} a_\alpha(x)\,D^\alpha u(x) + a_0(D^{k-1}u, \dots, Du, u, x) = 0.
\]
• A PDE is quasilinear if it is of the form
\[
\sum_{|\alpha| = k} a_\alpha(D^{k-1}u, \dots, Du, u, x)\,D^\alpha u + a_0(D^{k-1}u, \dots, Du, u, x) = 0.
\]
Example:
• diffusion equation: u_t = kΔu (linear)
• traffic flow: ρ_t + v_max ∂_x(ρ(1 − ρ/ρ_max)) = 0 (quasilinear)
• transport equation: u_t + Du · b = 0 (linear)
• minimal surface equation (quasilinear):
\[
\operatorname{div}\frac{Du}{\sqrt{1 + |Du|^2}} = 0
\implies \sum_i \frac{\partial}{\partial x_i}\frac{\partial_{x_i} u}{\sqrt{1 + |Du|^2}} = 0
\]
• wave equation: u_tt = c²u_xx (linear)
1.8 Exercises
1.1: Classify each of the following PDEs as follows:
a) Is the PDE linear, semilinear, quasilinear or fully nonlinear?
b) What is the order of the PDE?
Here is the list of the PDEs:
i) Linear transport equation u_t + Σ_{i=1}^n b_i D_i u = 0.
ii) Schrödinger's equation iu_t + Δu = 0.
iii) Beam equation u_t + u_xxxx = 0.
iv) Eikonal equation |Du| = 1.
v) p-Laplacian div(|Du|^{p−2}Du) = 0.
vi) Monge–Ampère equation det(D²u) = f.
1.2: Let Ω ⊂ R² be an open and bounded domain representing a heat conducting plate with
density ρ(x, t) and specific heat c(x). Find an equation for the evolution of the temperature
u(x, t) at the point x at the time t under the assumption that there is a heat source f(x) in
Ω. Use Fourier's law of heat conduction as the constitutive assumption and assume that all
functions are sufficiently smooth.
1.3:
a) Verify that the function u(x, t) = e^{−kt} sin x satisfies the heat equation u_t = ku_xx on
−∞ < x < ∞ and t > 0. Here k > 0 is the diffusion constant.
b) Verify that the function v(x, t) = sin(x − ct) satisfies the wave equation u_tt = c²u_xx
on −∞ < x < ∞ and t > 0. Here c > 0 is the wave speed.
c) Sketch u and v . What is the most striking difference between the evolution of the
temperature u and the wave v?
1.4: Suppose that u is a smooth function that minimizes the Dirichlet integral
\[
I(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx
\]
subject to given boundary conditions u(x) = g(x) on ∂Ω. Show that u satisfies Laplace's
equation Δu = 0 in Ω.
1.5: [Evans 2.5 #1] Suppose g is smooth. Write down an explicit formula for the function u
solving the initial value problem
\[
u_t + b \cdot Du + cu = 0 \ \text{ in } \mathbb{R}^n \times (0, \infty), \qquad
u = g \ \text{ on } \mathbb{R}^n \times \{t = 0\}.
\]
Here c ∈ R and b ∈ Rⁿ are constants.
1.6: Show that the minimal surface equation
\[
\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0
\]
can in two dimensions be written as
\[
u_{xx}(1 + u_y^2) + u_{yy}(1 + u_x^2) - 2u_xu_yu_{xy} = 0.
\]
1.7: Suppose that u = u(x, y) = v(r) is a radially symmetric solution of the minimal surface
equation
\[
u_{xx}(1 + u_y^2) + u_{yy}(1 + u_x^2) - 2u_xu_yu_{xy} = 0.
\]
Show that v satisfies
\[
v_{rr} + \frac{v_r}{r}(1 + v_r^2) = 0.
\]
Use this result to show that u(x, y) = ρ log(r + √(r² − ρ²)) is a solution of the minimal
surface equation for r = √(x² + y²) ≥ ρ > 0.
Hint: You can verify this by differentiating the given solution or by solving the ordinary
differential equation for v.
1.8: [Evans 2.5 #2] Prove that Laplace's equation Δu = 0 is rotation invariant; that is,
if R is an orthogonal n × n matrix and we define
\[
v(x) = u(Rx), \qquad x \in \mathbb{R}^n,
\]
then Δv = 0.
2Laplace’s Equation ∆u = 0 and Poisson’sEquation −∆u = f
“. . . You are too young to be burdened by something as cool as sadness.”
– Kurosaki Isshin
Contents
2.1 Laplace’s Eqn. in Polar Coordinates 18
2.2 The Fundamental Solution of the Laplacian 21
2.3 Properties of Harmonic Functions 26
2.4 Estimates for Derivatives of Harmonic Functions 32
2.5 Existence of Solutions 47
2.6 Solution of the Dirichlet Problem by the Perron Process 62
2.7 Exercises 66
2.1 Laplace’s Equation in Polar Coordinates and for Radially
Symmetric Functions
We say that u is harmonic if Δu = 0.
Example: The real and imaginary parts of holomorphic functions are harmonic:
\[
f(z) = z^2 = (x + iy)^2 = x^2 - y^2 + 2ixy,
\]
so both x² − y² and 2xy are harmonic functions of (x, y).
Now introduce polar coordinates
\[
r = \sqrt{x^2 + y^2}, \qquad \theta = \tan^{-1}\left(\frac{y}{x}\right),
\]
and write u(x, y) = v(r, θ).
[Figure: polar coordinates r, θ of the point (x, y).]
\[
\partial_x r = \partial_x\sqrt{x^2 + y^2} = \frac{1}{2}\frac{2x}{\sqrt{x^2 + y^2}} = \frac{x}{r},
\]
\[
\partial_{xx} r = \partial_x\left(\frac{x}{r}\right) = \frac{1}{r} + x\,\partial_x\left(\frac{1}{r}\right) = \frac{1}{r} - \frac{x^2}{r^3},
\qquad\text{since}\qquad
\partial_x \frac{1}{r} = \partial_x(x^2 + y^2)^{-1/2} = -\frac{1}{2}\frac{2x}{\left(\sqrt{x^2 + y^2}\right)^3} = \frac{-x}{r^3},
\]
\[
\partial_{xy} r = \partial_y\left(\frac{x}{r}\right) = \frac{-xy}{r^3}.
\]
In summary:
\[
\partial_x r = \frac{x}{r}, \quad \partial_y r = \frac{y}{r}, \quad
\partial_{xx} r = \frac{1}{r} - \frac{x^2}{r^3} = \frac{y^2}{r^3}, \quad
\partial_{yy} r = \frac{x^2}{r^3}, \quad
\partial_{xy} r = \frac{-xy}{r^3}.
\]
Now, we look at the derivatives of θ:
\[
\partial_x\theta = \partial_x\tan^{-1}\frac{y}{x} = \frac{1}{1 + \frac{y^2}{x^2}}\left(-\frac{y}{x^2}\right) = \frac{-y}{x^2 + y^2} = \frac{-y}{r^2},
\qquad
\partial_y\theta = \frac{x}{r^2},
\]
\[
\partial_{xx}\theta = \partial_x\left(-\frac{y}{r^2}\right) = \partial_x\big(-y(x^2 + y^2)^{-1}\big) = \frac{2xy}{(x^2 + y^2)^2} = \frac{2xy}{r^4},
\]
\[
\partial_{xy}\theta = \partial_y\left(-\frac{y}{r^2}\right) = -\frac{1}{r^2} - y\,\partial_y(x^2 + y^2)^{-1} = -\frac{1}{r^2} + \frac{2y^2}{r^4} = \frac{y^2 - x^2}{r^4},
\]
\[
\partial_{yy}\theta = \partial_y\left(\frac{x}{r^2}\right) = x\,\partial_y(x^2 + y^2)^{-1} = -\frac{2xy}{r^4}.
\]
In summary:
\[
\partial_x\theta = -\frac{y}{r^2}, \quad \partial_y\theta = \frac{x}{r^2}, \quad
\partial_{xx}\theta = \frac{2xy}{r^4}, \quad \partial_{yy}\theta = -\frac{2xy}{r^4}, \quad
\partial_{xy}\theta = \frac{y^2 - x^2}{r^4}.
\]
Now, we consider u(x, y) = v(r, θ):
\[
\partial_x u = v_r r_x + v_\theta\theta_x, \qquad
\partial_{xx} u = v_{rr} r_x^2 + 2v_{r\theta} r_x\theta_x + v_{\theta\theta}\theta_x^2 + v_r r_{xx} + v_\theta\theta_{xx},
\]
\[
\partial_y u = v_r r_y + v_\theta\theta_y, \qquad
\partial_{yy} u = v_{rr} r_y^2 + 2v_{r\theta} r_y\theta_y + v_{\theta\theta}\theta_y^2 + v_r r_{yy} + v_\theta\theta_{yy}.
\]
Hence
\[
\Delta u = u_{xx} + u_{yy}
= v_{rr}(r_x^2 + r_y^2) + 2v_{r\theta}(r_x\theta_x + r_y\theta_y) + v_{\theta\theta}(\theta_x^2 + \theta_y^2) + v_r(r_{xx} + r_{yy}) + v_\theta(\theta_{xx} + \theta_{yy})
\]
\[
= v_{rr}\left(\frac{x^2 + y^2}{r^2}\right) + v_{\theta\theta}\left(\frac{x^2 + y^2}{r^4}\right) + v_r\left(\frac{1}{r} - \frac{x^2}{r^3} + \frac{1}{r} - \frac{y^2}{r^3}\right),
\]
using r_xθ_x + r_yθ_y = 0 and θ_xx + θ_yy = 0.
Result: If u(x, y) = v(r, θ) and u solves Laplace's equation, then v solves
\[
v_{rr} + \frac{1}{r}v_r + \frac{1}{r^2}v_{\theta\theta} = 0
\qquad\text{(Laplace's equation in polar coordinates)}
\]
Now we consider Laplace’s equation for radially symmetric functions:
u(x) = v(r), r =
√x2
1 + . . .+ x2n
∂u
∂xi= vr (r)
∂r
∂xi= vr (r)
xir
∂2u
∂x2i
=∂
∂xi
(vr (r)
xir
)
= vr rx2i
r2+ vr
(1
r+ xi∂xi
1
r
)
∂xi1
r= ∂xi (x
21 + . . .+ x2
n )−1/2 = −1
2(x2
1 + . . .+ x2n )−3/22xi = − xi
r3
= vr rx2i
r2+ vr
(1
r− x
2i
r3
)
Thus,
∆u =
n∑
i=1
∂2
∂x2i
u
=
n∑
i=1
vr rx2i
r2+ vr
(1
r− x
2i
r3
)
= vr r +
(n
r− 1
r
)vr
Result: If u(x) = v(r) is a radially symmetric solution of Laplace’s equation then
vr r +n − 1
rvr = 0 (2.1)
2.2 The Fundamental Solution of the Laplacian
We find a special radially symmetric solution by solving (2.1) of the previous section.
Assuming v_r ≠ 0,
\[
\frac{v_{rr}}{v_r} = \frac{1 - n}{r}
\quad\xrightarrow{\ \text{integrate in } r\ }\quad
\ln v_r = (1 - n)\ln r + C_1
\quad\Longrightarrow\quad
v_r = C_2\,r^{1 - n}.
\]
Integrate again:
\[
n = 2:\quad v_r = \frac{C_2}{r} \implies v = C_2\ln r + C_3,
\qquad
n \ge 3:\quad v_r = \frac{C_2}{r^{n-1}} \implies v = \frac{-1}{n - 2}\frac{C_2}{r^{n-2}} + C_3.
\]
We thus find the special solutions
\[
u(x) = v(r) = \begin{cases} C_2\ln|x| + C_3 & n = 2 \\[4pt] -\dfrac{C_2}{n-2}\dfrac{1}{|x|^{n-2}} + C_3 & n \ge 3. \end{cases}
\]
The fundamental solution is of this form with particular constants.
Definition 2.2.1:
\[
\Phi(x) = \begin{cases} -\dfrac{1}{2\pi}\ln|x| & n = 2 \\[6pt] \dfrac{1}{n(n-2)\alpha(n)}\dfrac{1}{|x|^{n-2}} & n \ge 3 \end{cases}
\]
for x ≠ 0, where α(n) is the volume of the unit ball in Rⁿ.
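For n = 3 one has n(n − 2)α(n) = 3 · 1 · (4π/3) = 4π, so Φ(x) = 1/(4π|x|). A finite-difference sketch (the evaluation point and step size are arbitrary choices) confirms ΔΦ ≈ 0 away from the origin:

```python
import numpy as np

# n = 3 fundamental solution Phi(x) = 1/(4*pi*|x|); a centered second-difference
# Laplacian at an arbitrary point away from the origin should be ~ 0.
def phi(p):
    return 1.0 / (4.0*np.pi*np.linalg.norm(p))

p0, h = np.array([0.7, -0.4, 1.1]), 1e-3
lap = sum((phi(p0 + h*e) - 2.0*phi(p0) + phi(p0 - h*e)) / h**2 for e in np.eye(3))
```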
Note: ΔΦ = 0 for x ≠ 0, but Φ is not defined at x = 0. However, ΔΦ can be defined
everywhere in the sense of distributions:
\[
-\Delta\Phi = \delta_0 = \text{"Dirac mass" placed at } x = 0,
\]
formally
\[
\delta_0(x) = \begin{cases} 1 & x = 0 \\ 0 & x \ne 0, \end{cases}
\qquad
-\Delta\Phi(y - x) = \begin{cases} 0 & x \ne y \\ 1 & x = y. \end{cases}
\]
Formally, this suggests a solution formula for Poisson's equation
\[
-\Delta u = f \quad \text{in } \mathbb{R}^n,
\]
namely
\[
u(x) = \int_{\mathbb{R}^n} \Phi(x - y)f(y)\,dy.
\]
Why?
\[
\text{"}-\Delta u(x) = \int_{\mathbb{R}^n} -\Delta_x\Phi(x - y)f(y)\,dy = \int_{\mathbb{R}^n} \delta_x(y)f(y)\,dy = f(x).\text{"}
\]
Lemma 2.2.2: We have
\[
|D\Phi(x)| \le \frac{C}{|x|^{n-1}}, \qquad |D^2\Phi(x)| \le \frac{C}{|x|^n} \qquad (x \ne 0),
\]
and
\[
\frac{\partial\Phi}{\partial\nu}(x) = \frac{-1}{n\alpha(n)}\frac{1}{r^{n-1}} \qquad \forall x \in \partial B(0, r),
\]
where ∂Φ/∂ν(x) = DΦ(x) · x/|x| and ν(x) = x/|x|.
Proof: For n = 2:
\[
\frac{\partial}{\partial x_i}\ln|x| = \frac{1}{|x|}\frac{\partial}{\partial x_i}|x| = \frac{1}{|x|}\frac{x_i}{|x|},
\qquad
D\Phi(x) = -\frac{1}{2\pi}\frac{x}{|x|^2}, \qquad |D\Phi| \le C\frac{1}{|x|}.
\]
For the second derivatives,
\[
\frac{\partial}{\partial x_j}\frac{\partial}{\partial x_i}\ln|x| = \frac{\partial}{\partial x_j}\frac{x_i}{|x|^2} = \frac{\delta_{ij}}{|x|^2} + x_i\frac{\partial}{\partial x_j}\frac{1}{|x|^2},
\qquad
\frac{\partial}{\partial x_j}\frac{1}{|x|^2} = \frac{\partial}{\partial x_j}\Big(\sum_k x_k^2\Big)^{-1} = -\Big(\sum_k x_k^2\Big)^{-2}2x_j = -\frac{2x_j}{|x|^4},
\]
so
\[
\frac{\partial^2\Phi}{\partial x_i\partial x_j} = -\frac{1}{2\pi}\left(\frac{\delta_{ij}}{|x|^2} - \frac{2x_ix_j}{|x|^4}\right),
\qquad
\left|\frac{\partial^2\Phi}{\partial x_i\partial x_j}\right| \le \frac{1}{2\pi}\left(\frac{1}{|x|^2} + \frac{2|x_i||x_j|}{|x|^4}\right) \le \frac{1}{2\pi}\frac{3}{|x|^2}.
\]
Finally,
\[
D\Phi(x) = -\frac{1}{2\pi}D\ln|x| = -\frac{1}{2\pi}\frac{1}{|x|}\frac{x}{|x|},
\qquad
\frac{\partial\Phi}{\partial\nu}(x) = -\frac{1}{2\pi}\frac{x}{|x|^2}\cdot\frac{x}{|x|} = -\frac{1}{2\pi}\frac{|x|^2}{|x|^3} = -\frac{1}{2\pi}\frac{1}{|x|}.
\]
For n ≥ 3:
\[
\frac{\partial}{\partial x_i}\frac{1}{|x|^{n-2}} = \frac{\partial}{\partial x_i}\Big(\sum_k x_k^2\Big)^{-\frac{n-2}{2}}
= -\frac{n-2}{2}\Big(\sum_k x_k^2\Big)^{-\left(\frac{n-2}{2} + 1\right)}2x_i
= -\frac{(n-2)x_i}{|x|^n},
\]
so
\[
D\Phi(x) = \frac{-1}{n\alpha(n)}\frac{x}{|x|^n},
\qquad
|D\Phi(x)| \le \frac{1}{n\alpha(n)}\frac{|x|}{|x|^n} = \frac{1}{n\alpha(n)}\frac{1}{|x|^{n-1}}.
\]
The second derivatives are handled similarly. Finally,
\[
\frac{\partial\Phi}{\partial\nu}(x) = \frac{-1}{n\alpha(n)}\frac{x}{|x|^n}\cdot\frac{x}{|x|} = \frac{-1}{n\alpha(n)}\frac{|x|^2}{|x|^{n+1}} = \frac{-1}{n\alpha(n)}\frac{1}{|x|^{n-1}}.
\]
Remark: |D²Φ| ≤ C/|x|ⁿ, i.e. the second derivatives of Φ are not integrable about zero.
Theorem 2.2.3 (Solution of Poisson's equation in Rⁿ): Suppose that f ∈ C² and that f
has compact support, i.e. the closure of {x ∈ Rⁿ : f(x) ≠ 0} is contained in B(0, s) for some s > 0. Then
\[
u(x) = \int_{\mathbb{R}^n} \Phi(x - y)f(y)\,dy
\]
is twice differentiable and solves Poisson's equation −Δu = f in Rⁿ.
Proof:
1. Change variables: z = x − y, so y = x − z and dy = dz, i.e.
\[
\int_{\mathbb{R}^n} \Phi(x - y)f(y)\,dy = \int_{\mathbb{R}^n} \Phi(z)f(x - z)\,dz = \int_{\mathbb{R}^n} \Phi(y)f(x - y)\,dy = u(x).
\]
2. u is differentiable:
\[
\lim_{h\to0}\frac{u(x + he_i) - u(x)}{h} = \lim_{h\to0}\int_{\mathbb{R}^n} \Phi(y)\frac{f(x + he_i - y) - f(x - y)}{h}\,dy.
\]
Technical step: If we can interchange lim_{h→0} and integration, then (f ∈ C²):
\[
\frac{\partial u}{\partial x_i} = \int_{\mathbb{R}^n} \Phi(y)\frac{\partial f}{\partial x_i}(x - y)\,dy,
\]
and by the same arguments,
\[
\frac{\partial^2 u}{\partial x_i\partial x_j} = \int_{\mathbb{R}^n} \Phi(y)\frac{\partial^2 f}{\partial x_i\partial x_j}(x - y)\,dy,
\qquad
\Delta u = \int_{\mathbb{R}^n} \Phi(y)\Delta_x f(x - y)\,dy.
\]
So, what was the "black box" that enabled us to switch the integration and limit?
Black Box (Lebesgue's Dominated Convergence Theorem): Let f_k be a sequence of integrable
functions such that |f_k| ≤ g almost everywhere with g integrable, and f_k → f pointwise
almost everywhere. Then
\[
\lim_{k\to\infty}\int_\Omega f_k = \int_\Omega \lim_{k\to\infty} f_k = \int_\Omega f.
\]
Our application: we need
\[
\left|\Phi(y)\frac{f(x + he_i - y) - f(x - y)}{h}\right| \le g,
\qquad
g(y) = |\Phi(y)|\underbrace{\max_{z\in\mathbb{R}^n}\left|\frac{\partial f}{\partial x_i}(z)\right|}_{\le M},
\]
and g is integrable since f, and hence ∂f/∂x_i, has compact support.
\[
\Delta u = \int_{\mathbb{R}^n} \Phi(y)\Delta_x f(x - y)\,dy = \int_{\mathbb{R}^n} \Phi(y)\Delta_y f(x - y)\,dy
\]
\[
= \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} \Phi(y)\Delta_y f(x - y)\,dy + \int_{B(0,\varepsilon)} \Phi(y)\Delta_y f(x - y)\,dy =: I_1 + I_2.
\]
Now, we have
\[
|I_2| \le \max_{\mathbb{R}^n}|D^2f|\int_{B(0,\varepsilon)}|\Phi(y)|\,dy.
\]
For n = 2:
\[
\int_{B(0,\varepsilon)} \ln|x|\,dx = \int_0^{2\pi}\!\!\int_0^\varepsilon \ln r \cdot r\,dr\,d\theta = 2\pi\int_0^\varepsilon r\ln r\,dr
= 2\pi\left[\frac{r^2}{2}\ln r\,\bigg|_0^\varepsilon - \int_0^\varepsilon \frac{r^2}{2}\frac{1}{r}\,dr\right]
= 2\pi\left(\frac{\varepsilon^2}{2}\ln\varepsilon - \frac{\varepsilon^2}{4}\right).
\]
∴ |I₂| is O(ε²|ln ε|).
For n ≥ 3:
\[
\int_{B(0,\varepsilon)}\frac{1}{|x|^{n-2}}\,dx = \int_0^\varepsilon\!\!\int_{\partial B(0,r)}\frac{1}{r^{n-2}}\,dS\,dr
= \int_0^\varepsilon \frac{1}{r^{n-2}}Cr^{n-1}\,dr = C\int_0^\varepsilon r\,dr = \frac{C}{2}\varepsilon^2.
\]
Therefore
\[
|I_2| \le \max_{\mathbb{R}^n}|D^2f|\int_{B(0,\varepsilon)}|\Phi(y)|\,dy
= \begin{cases} O(\varepsilon^2|\ln\varepsilon|) & n = 2 \\ O(\varepsilon^2) & n \ge 3. \end{cases}
\]
In particular: I2 → 0 as ε→ 0
\[
I_1 = \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} \Phi(y)\Delta_y f(x - y)\,dy
= \int_{\partial B(0,\varepsilon)} \Phi(y)Df(x - y)\cdot\nu\,dS - \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} D\Phi(y)\cdot Df(x - y)\,dy =: J_1 + J_2.
\]
Now,
\[
|J_1| = \left|\int_{\partial B(0,\varepsilon)} \Phi(y)Df(x - y)\cdot\nu\,dS\right| \le \max|Df|\int_{\partial B(0,\varepsilon)}|\Phi(y)|\,dS(y),
\]
which for n = 2 is of order ε|ln ε| → 0 as ε → 0, and for n ≥ 3 of order (C/ε^{n−2})ε^{n−1} = Cε → 0 as ε → 0.
So we are left to look at
\[
J_2 = -\int_{\mathbb{R}^n\setminus B(0,\varepsilon)} D\Phi(y)\cdot Df(x - y)\,dy
= -\int_{\partial B(0,\varepsilon)} D\Phi(y)\cdot\nu_{\mathrm{in}}(y)\,f(x - y)\,dS + \int_{\mathbb{R}^n\setminus B(0,\varepsilon)} \underbrace{\Delta\Phi(y)}_{=0} f(x - y)\,dy
\]
\[
= \int_{\partial B(0,\varepsilon)} D\Phi(y)\cdot\nu_{\mathrm{out}}(y)\,f(x - y)\,dS
= \int_{\partial B(0,\varepsilon)} \frac{-1}{|\partial B(0,\varepsilon)|} f(x - y)\,dS
= -\fint_{\partial B(0,\varepsilon)} f(x - y)\,dS(y) \to -f(x) \ \text{ as } \varepsilon\to0.
\]
Notation: ⨍_U f = |U|⁻¹ ∫_U f, where |U| is the measure of U.
Finally,
\[
\Delta u(x) = \int_{\mathbb{R}^n} \Phi(y)\Delta f(x - y)\,dy
= \big(\text{terms tending to zero as } \varepsilon\to0\big) - \fint_{\partial B(0,\varepsilon)} f(x - y)\,dS \to -f(x) \ \text{ as } \varepsilon\to0,
\]
\[
\Longrightarrow\ -\Delta u(x) = f(x).
\]
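As a numerical aside, the solution formula can be sanity-checked in a case where the answer is classical. Taking f to be the indicator of the unit ball in R³ (not C², so strictly outside the theorem's hypotheses, but instructive), the shell theorem gives u(x) = |B(0, 1)| Φ(x) = 1/6 at the exterior point x = (2, 0, 0); a crude midpoint-rule quadrature reproduces this:

```python
import numpy as np

# u(x) = int_{B(0,1)} Phi(x - y) dy in R^3, with Phi = 1/(4*pi*|x|). By the
# shell theorem, at the exterior point x = (2,0,0) this equals |B(0,1)|*Phi(x)
# = (4*pi/3)/(4*pi*2) = 1/6. Midpoint-rule quadrature over the ball:
h = 0.05
g1 = np.arange(-1.0, 1.0, h) + h/2                    # cell midpoints
Y = np.array(np.meshgrid(g1, g1, g1)).reshape(3, -1).T
Y = Y[np.linalg.norm(Y, axis=1) <= 1.0]               # keep midpoints in the ball

x = np.array([2.0, 0.0, 0.0])
u_x = np.sum(1.0 / (4.0*np.pi*np.linalg.norm(x - Y, axis=1))) * h**3
```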
2.3 Properties of Harmonic Functions
2.3.1 WMP for Laplace’s Equation
To begin this section, we introduce the Laplacian differential operator:
\[
\Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}, \qquad u \in C^2(\Omega).
\]
Note that Δu is sometimes written as div∇u in classical notation. Coupled to this we have
the following definition.
Definition 2.3.1: u ∈ C2(Ω) is called harmonic in Ω if and only if ∆u = 0 in Ω.
Example 1: u(x₁, x₂) = x₁² − x₂² is a nonlinear harmonic function in R².
In addition to the above, we have the next definition.
Definition 2.3.2: u ∈ C²(Ω) is called subharmonic (superharmonic) if and only if Δu ≥ 0 (Δu ≤ 0) in Ω.
Remark: It is obvious, due to the linearity of the Laplacian, that if u is subharmonic, then
−u is superharmonic and vice-versa.
Example 2: If x ∈ Rn, then ∆|x |2 = 2n. Thus, |x |2 is subharmonic in Rn.
Now, we come to our first theorem
Theorem 2.3.3 (Weak Maximum Principle for the Laplacian): Let u ∈ C²(Ω) ∩ C⁰(Ω̄)
with Ω bounded in Rⁿ. If Δu ≥ 0 in Ω, then
\[
u \le \max_{\partial\Omega} u \qquad \left(\iff \max_{\bar\Omega} u = \max_{\partial\Omega} u\right). \tag{2.2}
\]
Conversely, if Δu ≤ 0 in Ω, then
\[
u \ge \min_{\partial\Omega} u \qquad \left(\iff \min_{\bar\Omega} u = \min_{\partial\Omega} u\right). \tag{2.3}
\]
Consequently, if Δu = 0 in Ω, then
\[
\min_{\partial\Omega} u \le u \le \max_{\partial\Omega} u. \tag{2.4}
\]
Proof: We will just prove the subharmonic result, as the superharmonic case is proved the
same way. Given u subharmonic, take ε > 0 and define v = u + ε|x|², so that Δv =
Δu + 2εn > 0 in Ω. If v attains a maximum in Ω, we have Δv ≤ 0 at that point, a
contradiction. Thus, by the compactness of Ω̄, we have
\[
v \le \max_{\partial\Omega} v.
\]
Finally, we let ε → 0 in the above to get the result.
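A tiny numerical illustration of the weak maximum principle (with the assumed harmonic function u = x² − y² on the closed unit disk): the maximum of u over a grid of points of the disk never exceeds its maximum over the boundary circle:

```python
import numpy as np

# Weak maximum principle illustration: u = x^2 - y^2 is harmonic, so its
# maximum over the closed unit disk is attained on the boundary circle.
n = 401
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= 1.0
disk_max = (X**2 - Y**2)[inside].max()        # max over grid points of the disk

t = np.linspace(0.0, 2*np.pi, 10001)
boundary_max = (np.cos(t)**2 - np.sin(t)**2).max()
```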
Corollary 2.3.4 (Uniqueness): Take u, v ∈ C²(Ω) ∩ C⁰(Ω̄). If Δu = Δv = 0 in Ω and
u = v on ∂Ω, then u = v in Ω.
Before moving on to more general elliptic equations, we briefly remind ourselves of the Dirichlet
problem, which takes the following form:
\[
\Delta u = 0 \ \text{ in } \Omega, \qquad u = \varphi \ \text{ on } \partial\Omega, \ \text{ for some } \varphi \in C^0(\partial\Omega).
\]
2.3.2 Poisson Integral Formula
The best possible scenario one can hope for when dealing in theoretical PDE is to actually
have an analytic solution to the equation. Obviously, it is easier to analyze a solution directly
rather than having to do so through its properties. Fortunately, this turns out to be possible
for the Dirichlet problem for Laplace’s equation in a ball.
To start off, let us consider some other examples of harmonic functions:
• Since Laplace’s equation is a constant coefficient equation, we know that if u ∈ C3(Ω)
is harmonic, then so is all its partial derivatives. Indeed,
Di(∆u) = 0 ⇐⇒ ∆(Diu) = 0.
• There are also radially symmetric solutions to Laplace's equation. To derive these, let
r = |x| and u(x) = g(r). We calculate
\[
D_i u = g'\cdot\frac{x_i}{|x|} = \frac{g'}{r}x_i,
\]
\[
\Delta u = \sum_i D_{ii}u = n\frac{g'}{r} + \frac{g''}{r^2}\sum_i x_i^2 - \frac{g'}{r^3}\sum_i x_i^2
= n\frac{g'}{r} + g'' - \frac{g'}{r}
= (n - 1)\frac{g'}{r} + g''
= \frac{(r^{n-1}g')'}{r^{n-1}} = 0.
\]
This implies that r^{n−1}g′(r) = constant; integration then yields
\[
g = \begin{cases} C_1 r^{2-n} + C_2 & n > 2 \\ C_1\ln r + C_2 & n = 2. \end{cases}
\]
These are all the radial solutions of the homogeneous Laplace's equation outside of the
origin. For simplicity we will take C₁ = 1 and C₂ = 0 in the above.
• It is easily verified that translating the above solutions, so that y is taken to be the new
origin, again gives harmonic functions:
\[
|x - y|^{2-n} \ (n > 2), \qquad \ln|x - y| \ (n = 2).
\]
Since these functions are symmetric in x and y, it's obvious the above are harmonic
with respect to either x or y provided x ≠ y. Taking a derivative indicates that
\[
\frac{x_i - y_i}{|x - y|^n}
\]
is also harmonic. Next consider
\[
V(x, y) = \frac{|y|^2 - |x|^2}{|x - y|^n}
= \frac{|y|^2 + |x|^2 - 2x\cdot y}{|x - y|^n} - \frac{2x\cdot(x - y)}{|x - y|^n}
= \frac{1}{|x - y|^{n-2}} - \frac{2x_i(x_i - y_i)}{|x - y|^n}.
\]
The last term is harmonic since it is (up to a constant factor) the derivative with respect
to y_i of |x − y|^{2−n}. Thus, V(x, y) is harmonic in both the x and y variables. It would
be difficult to verify directly that these functions solve Laplace's equation, but they are
readily constructed from simpler solutions.
Now, let us consider |y| = R and define
\[
v(x) = \int_{|y|=R} V(x, y)\,dS(y).
\]
First, it is clear that v(x) is harmonic for x ∈ B_R(0): for any fixed x in B_R(0), V(x, y) is
bounded, so we may interchange the Laplacian with the integral. Next, we claim that v(x)
is radial, i.e. only depends on |x|. To show this we consider an arbitrary rotation x = Px′,
where P is an orthogonal matrix (its rows/columns form an orthonormal basis of Rⁿ, and
Pᵀ = P⁻¹). So, given the transformation, we have
\[
v(Px) = \int_{|y|=R} \frac{|y|^2 - |Px|^2}{|Px - y|^n}\,dS(y) = \int_{|y|=R} \frac{R^2 - |Px|^2}{|Px - y|^n}\,dS(y).
\]
Now, we change variables with Pz = y:
\[
v(Px) = \int_{|Pz|=R} \frac{R^2 - |Px|^2}{|Px - Pz|^n}\,dS(Pz)
= \int_{|z|=R} \frac{R^2 - |x|^2}{|P(x - z)|^n}|\det(P)|\,dS(z)
= \int_{|z|=R} \frac{R^2 - |x|^2}{|x - z|^n}\,dS(z) = v(x).
\]
In the above, we have used the fact that rotations do not change vector magnitudes; also
|det(P)| = 1, since 1 = det(I) = det(PP⁻¹) = det(P) det(Pᵀ) = det(P)². So, v(x) is
rotationally invariant. Thus, we calculate
\[
v(0) = \int_{|y|=R} R^{2-n}\,dS(y) = R^{2-n}A(S_R(0)) = n\omega_n R.
\]
It turns out this value is attained for every x ∈ B_R(0); this can be verified by carrying out
the integration, but that is rather complicated.
Now, we can make the following definition.
Definition 2.3.5 (Poisson kernel):
\[
K(x, y) = \frac{R^2 - |x|^2}{n\omega_n R}\frac{1}{|x - y|^n},
\]
where |y| = R.
From the above, we already know that K(x, y) is harmonic in B_R(0) with respect to x. From
the calculation for v(x), we also have
\[
\int_{|y|=R} K(x, y)\,dS(y) = 1
\]
by construction. Next, we have
Definition 2.3.6: The Poisson integral for x ∈ B_R(0) is
\[
w(x) = \int_{|y|=R} K(x, y)\varphi(y)\,dS(y).
\]
Theorem 2.3.7 (Properties of the Poisson Integral): Given the above definition, w ∈
C^∞(B_R(0)) ∩ C⁰(B̄_R(0)) with Δw = 0 in B_R(0). Also w(x) → φ(y) as x → y ∈ ∂B_R(0).
Remark: Basically, one has that w solves the Dirichlet problem for Laplace's equation:
\[
\Delta w = 0 \ \text{ in } B_R(0), \qquad w = \varphi \ \text{ on } \partial B_R(0).
\]
Proof: w ∈ C^∞(B_R(0)) is obvious since, for fixed x ∈ B_R(0), K(x, y) is bounded and
harmonic; hence one may take derivatives of any order inside the integral (the derivatives are
also well behaved, as x ≠ y for our fixed x). So, all we need to show is that w(x) → φ(y₀)
as x → y₀ ∈ ∂B_R(0). Take an arbitrary ε > 0 and choose δ such that |y − y₀| < δ
implies |φ(y) − φ(y₀)| < ε. Keeping in mind the figure, we calculate:
[Figure: a point x within δ/2 of y₀ ∈ ∂B_R(0), with the boundary split into {|y − y₀| < δ} and {|y − y₀| > δ}.]
\[
|w(x) - \varphi(y_0)| = \left|\int_{\partial B_R(0)} K(x, y)(\varphi(y) - \varphi(y_0))\,dS(y)\right|
\]
\[
\le \int_{\partial B_R(0)\cap\{|y - y_0| < \delta\}} |K(x, y)||\varphi(y) - \varphi(y_0)|\,dS(y)
+ \int_{\partial B_R(0)\cap\{|y - y_0| > \delta\}} |K(x, y)||\varphi(y) - \varphi(y_0)|\,dS(y)
\]
\[
\le \varepsilon\int_{\partial B_R(0)\cap\{|y - y_0| < \delta\}} |K(x, y)|\,dS(y)
+ 2\max_{\partial B_R(0)}|\varphi| \int_{\partial B_R(0)\cap\{|y - y_0| > \delta\}} \frac{R^2 - |x|^2}{n\omega_n R\,|x - y|^n}\,dS(y).
\]
Now, if one chooses x such that |x − y₀| ≤ δ/2, this implies that |x − y| ≥ δ/2 for y ∈
∂B_R(0) ∩ {|y − y₀| > δ}. Since ∫ K dS = 1 bounds the first integral, we have
\[
|w(x) - \varphi(y_0)| \le \varepsilon + 2\max_{\partial B_R(0)}|\varphi| \cdot \frac{R^2 - |x|^2}{n\omega_n R\,(\delta/2)^n}\,A(\partial B_R(0)) \le 2\varepsilon,
\]
where for the last inequality we pick x close enough to y₀ that the second term is less than
ε (as this happens, R² − |x|² clearly shrinks to zero). So, we have now shown that w(x) →
φ(y₀) as x → y₀ ∈ ∂B_R(0). This also implies the continuity of w on B̄_R(0).
A very important consequence of the last theorem is that if u ∈ C²(Ω) is harmonic, then for
any ball B_R(z) ⊂ Ω one has
\[
u(x) = \frac{R^2 - |x - z|^2}{n\omega_n R}\int_{\partial B_R(z)} \frac{u(y)}{|x - y|^n}\,dS(y).
\]
In other words, the value of a harmonic function at any point is completely determined by
the values it takes on any given spherical shell surrounding that point! Taking x = z in the
above, we get the following:
Theorem 2.3.8 (Mean Value Property): If u ∈ C²(Ω) and is harmonic on Ω, then one has
\[
u(z) = \frac{1}{n\omega_n R^{n-1}}\int_{\partial B_R(z)} u(y)\,dS(y)
= \frac{1}{A(\partial B_R(z))}\int_{\partial B_R(z)} u(y)\,dS(y) \tag{2.5}
\]
for any z ∈ Ω and B_R(z) ⊂ Ω.
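A quick numerical sketch of (2.5) in the plane, with the assumed harmonic function u = x² − y²: the average of u over any circle equals its value at the center.

```python
import numpy as np

# Mean value property check for the harmonic u(x, y) = x^2 - y^2: the average
# of u over the circle of radius R about z equals u(z).
def u(p):
    return p[..., 0]**2 - p[..., 1]**2

z, R = np.array([0.4, -1.1]), 0.75
m = 100000
t = (np.arange(m) + 0.5) * 2*np.pi / m          # midpoint angles
pts = z + R * np.column_stack([np.cos(t), np.sin(t)])

gap = abs(u(pts).mean() - u(z))
```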
In addition, one has
Corollary 2.3.9: If u ∈ C²(Ω) and is subharmonic (superharmonic) on Ω, then one has
\[
u(z) \le (\ge)\ \frac{1}{A(\partial B_R(z))}\int_{\partial B_R(z)} u(y)\,dS(y) \tag{2.6}
\]
for any z ∈ Ω and B_R(z) ⊂ Ω.
Proof: This is a simple matter of solving the following Dirichlet problem:
\[
\Delta v = 0 \ \text{ in } B_R(z), \qquad v = u \ \text{ on } \partial B_R(z).
\]
First, we know that Δ(u − v) ≥ (≤) 0. Thus, the weak maximum principle states that
u ≤ (≥) v in B_R(z). So, putting everything together, one has
\[
u(z) \le (\ge)\ v(z) = \frac{1}{A(\partial B_R(z))}\int_{\partial B_R(z)} v(y)\,dS(y)
= \frac{1}{A(\partial B_R(z))}\int_{\partial B_R(z)} u(y)\,dS(y),
\]
where the middle equality is the mean value property for the harmonic function v.
2.3.3 Strong Maximum Principle
Theorem 2.3.10 (Strong Maximum Principle): If u ∈ C²(Ω) with Ω connected and Δu ≥
(≤) 0, then u cannot attain an interior maximum (minimum) in Ω, unless u is constant.
Proof: Suppose u(y) = M := max_Ω̄ u for some y ∈ Ω. Clearly, one has
\[
\Delta(M - u) \le (\ge)\ 0.
\]
Thus, by the corollary to the mean value property,
\[
(M - u)(y) \ge (\le)\ \frac{1}{n\omega_n R^{n-1}}\int_{\partial B_R(y)} \big(M - u(x)\big)\,dS(x)
\]
for any B_R(y) ⊂ Ω. Since M − u ≥ 0 and (M − u)(y) = 0, this implies that u = M on
B_R(y), which in turn implies u = M on Ω as Ω is connected.
Problem: Take Ω ⊂ Rⁿ bounded and u ∈ C²(Ω) ∩ C⁰(Ω̄) with
\[
\Delta u = u^3 - u \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.
\]
Prove that −1 ≤ u ≤ 1. Can u attain either value ±1?
2.4 Estimates for Derivatives of Harmonic Functions
Now, we will go through estimating the derivatives of harmonic functions. First, note that
the mean value property immediately implies that
\[
u(x) = \frac{1}{\omega_n R^n}\int_{B_R(x)} u(y)\,dy
\]
for any B_R(x) ⊂ Ω and u harmonic. This can simply be ascertained by performing a radial
integration on the statement of the mean value property. As we have shown derivatives to
be harmonic, we have that
\[
D_i u(x) = \frac{1}{\omega_n R^n}\int_{B_R(x)} \operatorname{div} u_{,i}\,dy,
\]
where u_{,i} is the vector field whose ith component is u and whose other components are zero.
So, applying the divergence theorem to the above, we have
\[
D_i u(x) = \frac{1}{\omega_n R^n}\int_{\partial B_R(x)} u_{,i}\cdot\nu\,dS(y)
\le \frac{1}{\omega_n R^n}\sup_{B_R(x)}|u| \int_{\partial B_R(x)} dS(y)
= \frac{n}{R}\sup_{B_R(x)}|u|.
\]
From this we see that every first derivative is bounded by the final term above, thus
\[
|Du(x)| \le \frac{n}{R}\sup_{B_R(x)}|u|.
\]
[Figure: concentric balls about x with radii increasing by R/|β| at each step.]
Applying this result iteratively over concentric balls whose radii increase by R/|β| for each
iteration yields the following (refer to the figure):
\[
|D^\beta u(x)| \le \left(\frac{n|\beta|}{R}\right)^{|\beta|}\sup_{B_R(x)}|u|.
\]
From this estimate, the following is clear.
Theorem 2.4.1: Consider Ω bounded and u ∈ C²(Ω) ∩ C⁰(Ω̄). If u is harmonic in Ω,
then for any Ω′ ⊂⊂ Ω the following estimate holds:
\[
|D^\beta u(x)| \le \left(\frac{n|\beta|}{d_{\Omega'}}\right)^{|\beta|}\sup_\Omega|u| \qquad \forall x \in \Omega', \tag{2.7}
\]
where d_{Ω′} = dist(Ω′, ∂Ω).
2.4.1 Mean-Value and Maximum Principles
For n = 1: u″ = 0 gives u(x) = ax + b, so u is affine, and with I = (x₀ − r, x₀ + r),
\[
u(x_0) = \frac{1}{2}\big(u(x_0 + r) + u(x_0 - r)\big) = \fint_{\partial I} u(y)\,dS(y),
\qquad
u(x_0) = \fint_I u(y)\,dy.
\]
[Figure: the affine function ax + b on the interval I = (x₀ − r, x₀ + r).]
Theorem 2.4.2 (Mean value property for harmonic functions): Suppose that u is harmonic
on U, and that B(x, r) ⊂ U. Then
\[
u(x) = \fint_{\partial B(x,r)} u(y)\,dS(y) = \fint_{B(x,r)} u(y)\,dy.
\]
Proof:
Idea: Set Φ(r) = ⨍_{∂B(x,r)} u(y) dS(y) and prove that Φ′(r) = 0.
\[
\frac{d}{dr}\Phi(r) = \frac{d}{dr}\fint_{\partial B(x,r)} u(y)\,dS(y)
= \frac{d}{dr}\fint_{\partial B(0,1)} u(x + rz)\,dS(z)
= \fint_{\partial B(0,1)} Du(x + rz)\cdot z\,dS(z)
\]
\[
= \fint_{\partial B(x,r)} Du(y)\cdot\frac{y - x}{r}\,dS(y)
= \fint_{\partial B(x,r)} Du(y)\cdot\nu(y)\,dS(y) \quad\text{(see the figure below)}
= \frac{1}{|\partial B(x,r)|}\int_{B(x,r)} \underbrace{\operatorname{div} Du(y)}_{\Delta u = 0}\,dy.
\]
∴ Φ′(r) = 0.
Theorem 2.4.3 (Maximum principle for harmonic functions): Let u be harmonic (u ∈
C²(Ω) ∩ C(Ω̄), Δu = 0) in an open, bounded Ω. Then
\[
\max_{\bar\Omega} u = \max_{\partial\Omega} u.
\]
Moreover, if Ω is connected and if the maximum of u is attained in Ω, then u is constant.
[Figure: the outward unit normal ν(y) = (y − x)/r at y ∈ ∂B(x, r).]
Remark:
• The second statement is also referred to as the strong maximum principle.
• The assertion also holds for the minimum of u: replace u by −u.
Proof: We prove the strong maximum principle. Suppose that M = max_Ω̄ u is attained at
x₀ ∈ Ω; then there exists a ball B(x₀, r) ⊂ Ω with r > 0. By the mean-value property,
\[
M = u(x_0) = \fint_{B(x_0,r)} u(y)\,dy \le \fint_{B(x_0,r)} M\,dy = M,
\]
so we have equality, and hence u ≡ M on B(x₀, r); indeed, if u(z) < M for a point
z ∈ B(x₀, r), then continuity would give a strict inequality above. Now let
\[
\mathcal{M} = \{x \in \Omega : u(x) = M\}.
\]
𝓜 is a closed set in Ω since u is continuous; 𝓜 is also open in Ω since, by the above, it can
be viewed as a union of open balls. Hence 𝓜 = Ω and u is constant in Ω.
An application of maximum principles: uniqueness of solutions of PDEs.
Theorem 2.4.4: There is at most one smooth solution of the boundary-value problem
\[
-\Delta u = f \ \text{ in } \Omega, \qquad u = g \ \text{ on } \partial\Omega.
\]
Proof: Suppose u₁ and u₂ are both solutions. Then u₁ − u₂ is harmonic with zero boundary
values, so
\[
\max_{\bar\Omega}(u_1 - u_2) = \max_{\partial\Omega}(u_1 - u_2) = 0 \implies u_1 - u_2 \le 0 \ \text{ in } \Omega.
\]
The same argument for −(u₁ − u₂) gives −(u₁ − u₂) ≤ 0 in Ω, so |u₁ − u₂| ≤ 0, i.e.
u₁ = u₂ in Ω. Hence there exists at most one solution!
2.4.2 Regularity of Solutions
Goal: Harmonic functions are analytic (they can be expressed as convergent power series).
• Need that u is infinitely many times differentiable.
• Need estimates for the derivatives of u.
Next goal: Harmonic functions are smooth.
A way to smoothen functions is mollification.
Idea: (weighted) averages of functions are smoother.
Definition 2.4.5:
η(x) = { C e^{1/(|x|^2 − 1)}, |x| < 1;  0, |x| ≥ 1 }
Fact: η is infinitely many times differentiable, and η ≡ 0 outside B(0, 1). We fix C such that ∫_{R^n} η(y) dy = 1, and set
η_ε(x) = (1/ε^n) η(x/ε).
• η_ε = 0 outside B(0, ε)
• ∫_{R^n} η_ε(y) dy = 1
If u is integrable, then
u_ε(x) := ∫_{R^n} η_ε(x − y)u(y) dy = ∫_{R^n} η_ε(y)u(x − y) dy = ∫_{B(x,ε)} η_ε(x − y)u(y) dy
is called the mollification of u.
Theorem 2.4.6 (Properties of u_ε):
• u_ε is C^∞ (infinitely many times differentiable)
• u_ε → u pointwise a.e. as ε → 0
• If u is continuous, then u_ε → u uniformly on compact subsets
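As a small numerical illustration (not from the notes; all helper names are ad hoc), one can mollify the continuous but non-differentiable u(x) = |x| in one dimension. Since η is even, u_ε agrees with u exactly at points farther than ε from the kink, where u is affine:

```python
import math

def eta(x):
    # the standard bump of Definition 2.4.5 in 1D (unnormalized): smooth, supported in (-1, 1)
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

# normalization constant: make the bump integrate to 1 (midpoint rule on [-1, 1])
_m = 2000
_h = 2.0 / _m
Z = sum(eta(-1.0 + (k + 0.5) * _h) for k in range(_m)) * _h

def mollify(u, x, eps, m=2000):
    # u_eps(x) = integral over |y| < eps of eta_eps(y) u(x - y) dy, eta_eps(y) = eta(y/eps)/(Z*eps)
    h = 2.0 * eps / m
    s = 0.0
    for k in range(m):
        y = -eps + (k + 0.5) * h
        s += eta(y / eps) / Z * u(x - y)
    return s * h / eps

# at x = 0.3 the kink of |x| lies outside the averaging window once eps < 0.3,
# and there the symmetric weighted average reproduces the affine function exactly
for eps in (0.5, 0.1, 0.02):
    print(eps, mollify(abs, 0.3, eps) - 0.3)
```

For ε = 0.5 the kink is inside the window, and u_ε(0.3) > u(0.3): the mollification of a convex function lies above it. For the smaller ε the difference is at the level of rounding error.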
Now since Φ′(r) = 0,
Φ(r) = lim_{ρ→0} Φ(ρ) = lim_{ρ→0} −∫_{∂B(x,ρ)} u(y) dS(y) = u(x).
∴ Φ(r) = u(x) for all r such that B(x, r) ⊂ U. For the solid average,
−∫_{B(x,r)} u(y) dy = (1/|B(x, r)|) ∫_0^r ∫_{∂B(x,ρ)} u(y) dS(y) dρ
= (1/|B(x, r)|) ∫_0^r |∂B(x, ρ)| −∫_{∂B(x,ρ)} u(y) dS(y) dρ    (the inner average equals u(x))
= (u(x)/|B(x, r)|) ∫_0^r nα(n)ρ^{n−1} dρ
= (u(x)/(r^n α(n))) nα(n) (1/n) ρ^n |_0^r = u(x).
Note: nα(n) is the surface area of ∂B(0, 1) in R^n.
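As a quick numerical sanity check (not part of the notes; names are ad hoc), the surface mean value property can be verified in R^2 for the harmonic function u(x, y) = x^2 − y^2:

```python
import math

def u(x, y):
    # u(x, y) = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return x * x - y * y

def sphere_average(f, x0, y0, r, m=10000):
    # average of f over the circle of radius r about (x0, y0), uniform nodes
    total = 0.0
    for k in range(m):
        t = 2.0 * math.pi * k / m
        total += f(x0 + r * math.cos(t), y0 + r * math.sin(t))
    return total / m

x0, y0, r = 0.3, -0.7, 0.5
print(sphere_average(u, x0, y0, r) - u(x0, y0))  # ~ 0, at machine precision
```

Here u restricted to a circle is a trigonometric polynomial of degree 2, so the uniform quadrature reproduces the average exactly up to rounding.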
Aside: complex differentiable =⇒ analytic:
f(z) = Σ_{n=0}^∞ (a_n/n!) (z − z0)^n,   a_n = (n!/(2πi)) ∮_γ f(ω)/(ω − z0)^{n+1} dω = f^{(n)}(z0).
Theorem 2.4.7 (): If u ∈ C(Ω), Ω bounded and open, satisfies the mean-value property,
then u ∈ C∞(Ω)
[Figure: the set Ω_ε = {x ∈ Ω : dist(x, ∂Ω) > ε} at distance ε inside Ω.]
Proof: Take ε > 0 and set
Ω_ε = {x ∈ Ω : dist(x, ∂Ω) > ε}.
In Ω_ε consider u_ε(x) = ∫_{R^n} η_ε(x − y)u(y) dy, with η as in Definition 2.4.5:
η(y) = { C e^{1/(|y|^2 − 1)}, |y| < 1;  0, |y| ≥ 1 }.
Then, using that η is radial,
u_ε(x) = (1/ε^n) ∫_{B(x,ε)} η(|x − y|/ε) u(y) dy
= (1/ε^n) ∫_0^ε ∫_{∂B(x,ρ)} η(ρ/ε) u(y) dS(y) dρ
= (1/ε^n) ∫_0^ε η(ρ/ε) |∂B(x, ρ)| −∫_{∂B(x,ρ)} u(y) dS dρ    (the average equals u(x) by the mean-value property)
= (u(x)/ε^n) ∫_0^ε η(ρ/ε) nα(n)ρ^{n−1} dρ
= (u(x)/ε^n) ∫_0^ε ∫_{∂B(x,ρ)} η(ρ/ε) dS dρ
= u(x) ∫_{R^n} η_ε(y) dy = u(x).
So u_ε(x) = u(x) in Ω_ε
=⇒ u ∈ C^∞(Ω_ε).
Since ε > 0 was arbitrary, u ∈ C^∞(Ω).
Corollary 2.4.8 (): If u ∈ C(Ω) satisfies the mean-value property, then u is harmonic.
Proof: The mean-value property =⇒ u ∈ C^∞(Ω). Our proof of "harmonic =⇒ mean-value property" gave
Φ′(r) = (r/n) −∫_{B(x,r)} ∆u(y) dy,
where Φ(r) = −∫_{∂B(x,r)} u(y) dS(y).
Now Φ′(r) = 0, and thus
−∫_{B(x,r)} ∆u(y) dy = 0 for every ball B(x, r) ⊂ Ω.
Since ∆u is smooth, this is only possible if ∆u = 0, i.e. u is harmonic.
2.4.3 Estimates on Harmonic Functions
Theorem 2.4.9 (Local estimates for harmonic functions): Suppose u ∈ C^2(Ω), u harmonic. Then
|D^α u(x0)| ≤ (C_k/r^{n+k}) ∫_{B(x0,r)} |u(y)| dy,  |α| = k,
where
C_0 = 1/α(n),   C_k = (2^{n+1} n k)^k/α(n) for k = 1, 2, . . . ,
and B(x0, r) ⊂ Ω.
Remark: equivalently,
|D^α u(x0)| ≤ (C/r^k) −∫_{B(x0,r)} |u(y)| dy,  |α| = k.
Proof: (by induction)
k = 0:
u(x0) = −∫_{B(x0,r)} u(y) dy
∴ |u(x0)| ≤ (1/α(n)) (1/r^n) ∫_{B(x0,r)} |u(y)| dy.
k = 1: u harmonic =⇒ u ∈ C^∞(Ω) and
∆u_{x_i} = (∂/∂x_i) ∆u = 0,
i.e., all derivatives of u are harmonic. Use the mean value property for u_{x_i} = (∂/∂x_i)u on B(x0, r/2):
u_{x_i}(x0) = −∫_{B(x0, r/2)} u_{x_i}(y) dy.
[Figure: the concentric balls B(x0, r/2) ⊂ B(x0, r).]
Note: u_{x_i} = div(0, . . . , 0, u, 0, . . . , 0), where u is in the ith component. Use the divergence theorem:
u_{x_i}(x0) = (1/|B(x0, r/2)|) ∫_{∂B(x0, r/2)} u(y) ν_i dS(y)
≤ (2^n/(α(n) r^n)) |∂B(x0, r/2)| max_{y ∈ ∂B(x0, r/2)} |u(y)|,  with |∂B(x0, r/2)| = nα(n) r^{n−1}/2^{n−1},
= (2n/r) max_{y ∈ ∂B(x0, r/2)} |u(y)|
≤ (2n/r) (1/α(n)) (1/(r/2)^n) max_{y ∈ ∂B(x0, r/2)} ∫_{B(y, r/2)} |u(z)| dz    (the case k = 0 on B(y, r/2) ⊂ B(x0, r))
≤ (2^{n+1} n/(α(n) r^{n+1})) ∫_{B(x0,r)} |u(y)| dy.
k ≥ 2: For |α| = k we can write
D^α u(x) = (∂/∂x_i) D^β u(x),
|β| = k − 1 for some 1 ≤ i ≤ n.
[Figure: the ball B(x0, r/k) and, for y ∈ ∂B(x0, r/k), the ball B(y, (k − 1)r/k), both contained in B(x0, r).]
D^α u is harmonic. The mean value property for D^α u on B(x0, r/k) gives
|D^α u(x0)| = |−∫_{B(x0, r/k)} D^α u(y) dy|    (D^α u = (∂/∂x_i) D^β u)
= (1/|B(x0, r/k)|) |∫_{∂B(x0, r/k)} D^β u ν_i dS|
≤ (|∂B(x0, r/k)|/|B(x0, r/k)|) max_{y ∈ ∂B(x0, r/k)} |D^β u(y)|
= (nα(n)(r/k)^{n−1}/(α(n)(r/k)^n)) max_{y ∈ ∂B(x0, r/k)} |D^β u(y)|
= (nk/r) max_{y ∈ ∂B(x0, r/k)} |D^β u(y)|.
For y ∈ ∂B(x0, r/k) we have B(y, (k − 1)r/k) ⊂ B(x0, r), and the induction hypothesis on this ball gives
|D^β u(y)| ≤ (1/((k − 1)r/k)^{n+k−1}) ((2^{n+1} n(k − 1))^{k−1}/α(n)) ∫_{B(x0,r)} |u(t)| dt
= (1/r^{n+k−1}) (k/(k − 1))^{n+k−1} ((2^{n+1} n(k − 1))^{k−1}/α(n)) ∫_{B(x0,r)} |u(t)| dt
= (1/r^{n+k−1}) (k/(k − 1))^n ((2^{n+1} n k)^{k−1}/α(n)) ∫_{B(x0,r)} |u(t)| dt
≤ (1/r^{n+k−1}) 2^n ((2^{n+1} n k)^{k−1}/α(n)) ∫_{B(x0,r)} |u(t)| dt.
=⇒ |D^α u(x0)| ≤ (nk/r) (1/r^{n+k−1}) 2^n ((2^{n+1} n k)^{k−1}/α(n)) ∫_{B(x0,r)} |u(t)| dt
≤ (1/r^{n+k}) ((2^{n+1} n k)^k/α(n)) ∫_{B(x0,r)} |u(t)| dt = (C_k/r^{n+k}) ∫_{B(x0,r)} |u(t)| dt.
As a consequence of this, we get
Theorem 2.4.10 (Liouville’s Theorem): If u is harmonic on Rn and u is bounded, then u
is constant.
Proof:
|Du(x0)| ≤ (C/r^{n+1}) ∫_{B(x0,r)} |u(y)| dy
≤ (C/r^{n+1}) M |B(x0, r)|, where M := sup_{y ∈ R^n} |u(y)|,
≤ (C/r) M,
for every ball B(x0, r) ⊆ R^n. Letting r → ∞ gives Du(x0) = 0 for every x0, so u is constant.
2.4.4 Analyticity of Harmonic Functions
Theorem 2.4.11 (Analyticity of harmonic functions): Suppose that Ω is open (and bounded)
and that u ∈ C2(Ω), u harmonic in Ω. Then u is analytic i.e. u can be expanded locally into
a convergent power series.
Proof:
[Figure: x0 ∈ Ω with the balls B(x0, r) and B(x0, 2r), where 4r = dist(x0, ∂Ω).]
Suppose that x0 ∈ Ω; we show that u can be expanded into a power series at x0, i.e.
u(x) = Σ_α (D^α u(x0)/α!) (x − x0)^α.
Let r = (1/4) dist(x0, ∂Ω), and let
M = (1/(α(n) r^n)) ∫_{B(x0,2r)} |u(y)| dy.
Local estimates: ∀x ∈ B(x0, r) we have B(x, r) ⊂ B(x0, 2r) ⊂ Ω, and thus
max_{x ∈ B(x0,r)} |D^α u(x)| ≤ M (1/r^k) (2^{n+1} n k)^k = M (2^{n+1} n k/r)^k = M (2^{n+1} n/r)^{|α|} |α|^{|α|}.
Stirling’s formula:
limk→∞
√kkk
k!ek=
1√2π
=⇒ kk ≤ C√2π
k!ek√k
≤ C1k!ek
Multinomial theorem:
n^k = (Σ_{i=1}^n 1)^k = Σ_{|α|=k} |α|!/α! ≥ |α|!/α! for every α with |α| = k
=⇒ |α|! ≤ n^k α!.
max_{x ∈ B(x0,r)} |D^α u(x)| ≤ M (2^{n+1} n/r)^{|α|} |α|^{|α|}
≤ CM (2^{n+1} n/r)^{|α|} |α|! e^{|α|}    (Stirling)
≤ CM (2^{n+1} n^2/r)^{|α|} α! e^{|α|}    (|α|! ≤ n^{|α|} α!)
= CM (2^{n+1} e n^2/r)^{|α|} α!.
Taylor series:
Σ_{N=0}^∞ Σ_{|α|=N} (|D^α u(x0)|/α!) |(x − x0)^α| ≤ Σ_{N=0}^∞ Σ_{|α|=N} CM (2^{n+1} e n^2/r)^N |x − x0|^N
There are no more than n^N multi-indices α with |α| = N, thus
≤ Σ_{N=0}^∞ CM (2^{n+1} e n^2/r)^N n^N |x − x0|^N
= Σ_{N=0}^∞ CM (2^{n+1} e n^3/r)^N |x − x0|^N
≤ Σ_{N=0}^∞ CM q^N,  |q| < 1.
The last series converges precisely when
(2^{n+1} e n^3/r) |x − x0| < 1, i.e. |x − x0| < r/(2^{n+1} e n^3).
Result: the Taylor series converges in a ball with radius R = r/(2^{n+1} e n^3), which depends only on dist(x0, ∂Ω) and the dimension.
2.4.5 Harnack’s Inequality
Theorem 2.4.12 (Harnack’s Inequality): Suppose that Ω is open and that Ω′ ⊂⊂ Ω (Ω′
is compactly contained in Ω), and Ω′ is connected. Suppose that u ∈ C2(Ω) and harmonic,
and nonnegative, then
supΩ′u ≤ C inf
Ω′u
and this constant does not depend on u.Gregory T. von Nessi
[Figure: Ω′ ⊂⊂ Ω with 4r = dist(Ω′, ∂Ω); around each x ∈ Ω′ the ball B(x, 2r) stays inside Ω.]
This means that nonnegative harmonic functions cannot oscillate wildly on Ω′.
Remark: if inf_{Ω′} u = 0, then sup_{Ω′} u = 0, so u ≡ 0 on Ω′.
[Figure: a point x0 ∈ Ω′ with u(x0) = 0.]
This also follows from the minimum principle for harmonic functions.
Proof of Harnack:
Let 4r = dist(Ω′, ∂Ω) and take x, y ∈ Ω′ with |x − y| ≤ r, so that B(x, 2r) ⊂ Ω and B(y, r) ⊆ B(x, 2r). Then
u(x) = −∫_{B(x,2r)} u(z) dz
≥ (1/|B(x, 2r)|) ∫_{B(y,r)} u(z) dz
= (|B(y, r)|/|B(x, 2r)|) −∫_{B(y,r)} u(z) dz
= (1/2^n) u(y),
i.e. u(x) ≥ (1/2^n) u(y). The same argument interchanging x and y yields u(y) ≥ (1/2^n) u(x), hence
2^n u(x) ≥ u(y) ≥ (1/2^n) u(x)  ∀x, y ∈ Ω′, |x − y| ≤ r.
[Figure: a chain of balls B(x0, r), . . . , B(xk, r) along a curve γ joining x to y in Ω′.]
Ω̄′ is compact, so we can cover Ω̄′ by at most N balls B(x_i, r). Take x, y ∈ Ω′, a curve γ joining x and y, and a chain of balls B(x0, r), . . . , B(x_k, r) covering γ, where k ≤ N. Now |x − x0| ≤ r, so
u(x) ≥ (1/2^n) u(x0)
≥ (1/2^n)((1/2^n) u(y0)) = (1/2^{2n}) u(y0)
≥ (1/2^{3n}) u(x1)
...
≥ (1/2^{(2k+2)n}) u(y)
≥ (1/2^{(2N+2)n}) u(y).
So for all x, y ∈ Ω′ we get
u(x) ≥ (1/2^{(2N+2)n}) u(y).
Taking x_i such that u(x_i) → inf_{x ∈ Ω′} u(x) and y_i such that u(y_i) → sup_{x ∈ Ω′} u(x) gives the assertion.
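To see the inequality at work (this numerical example is not from the notes; names are ad hoc), take Ω the unit disk and Ω′ = B(0, 1/2), and sample several positive harmonic functions, here translates of the planar Poisson kernel. The explicit Harnack inequality of Exercise 2.8 predicts sup/inf ≤ 9 for this geometry, independently of the particular function:

```python
import math

def poisson_kernel_2d(x, y, bx, by):
    # (1 - |x|^2)/|x - b|^2 with b on the unit circle: positive and harmonic in the unit disk
    return (1.0 - x * x - y * y) / ((x - bx) ** 2 + (y - by) ** 2)

def sup_inf_ratio(f, r=0.5, m=100):
    # sup/inf of f over a grid on the closed disk of radius r
    vals = [f(r * i / m, r * j / m)
            for i in range(-m, m + 1) for j in range(-m, m + 1)
            if (r * i / m) ** 2 + (r * j / m) ** 2 <= r * r]
    return max(vals) / min(vals)

ratios = [sup_inf_ratio(lambda x, y, a=a: poisson_kernel_2d(x, y, math.cos(a), math.sin(a)))
          for a in (0.0, 1.0, 2.5)]
print(ratios)  # each ratio is <= 9; for b = (1, 0) the bound 9 is attained on the grid
```

The bound is uniform over the sampled functions, which is exactly the point of the theorem: the Harnack constant depends on the geometry (Ω, Ω′, n), not on u.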
Properties of harmonic functions:
• mean-value property
• maximum principle
• regularity
• local estimates
• analyticity
• Harnack’s inequality
2.5 Existence of Solutions
∆u = 0 in Ω, u = g on ∂Ω    (Laplace's equation)
−∆u = f in Ω, u = g on ∂Ω    (Poisson's equation)
• Green’s function → general representation formula
• Finding Green’s function for special domains, half-space and a ball
• Perron’s Method - get existence on fairly general domains from the existence on balls
2.5.1 Green’s Functions
We have seen that in R^n,
u(x) = ∫_{R^n} Φ(y − x) f(y) dy
solves
−∆u = f in R^n.
Issue: construct fundamental solutions with boundary conditions.
General representation formula for smooth functions u ∈ C^2: let Ω be a bounded, open domain, x ∈ Ω, and ε > 0 such that B(x, ε) ⊂ Ω. Define Ω_ε = Ω \ B(x, ε).
[Figure: the domain Ω_ε = Ω \ B(x, ε).]
0 = ∫_{Ω_ε} u(y) ∆Φ(y − x) dy
= ∫_{∂Ω_ε} u(y) DΦ(y − x) · ν dS(y) − ∫_{Ω_ε} Du(y) · DΦ(y − x) dy
= ∫_{∂Ω_ε} [u(y) DΦ(y − x) · ν − Du(y) · ν Φ(y − x)] dS(y) + ∫_{Ω_ε} ∆u(y) Φ(y − x) dy.
As ε → 0 the volume term tends to
∫_Ω ∆u(y) Φ(y − x) dy,
since the singularity of Φ is integrable.
The boundary terms are
∫_{∂Ω} [u(y) DΦ(y − x) · ν − Du(y) · ν Φ(y − x)] dS(y)
+ ∫_{∂B(x,ε)} u(y) DΦ(y − x) · ν dS(y)    (=: I1)
− ∫_{∂B(x,ε)} Du(y) · ν Φ(y − x) dS(y).    (=: I2)
Since |Du| ≤ C near x and |∂B(x, ε)| = Cε^{n−1}, while on ∂B(x, ε)
|Φ| ≤ { Cε^{2−n}, n ≥ 3;  C|ln ε|, n = 2 },
we see that
|I2| ≤ { Cε, n ≥ 3;  Cε|ln ε|, n = 2 }
=⇒ I2 → 0 as ε → 0.
I1 = ∫_{∂B(x,ε)} u(y) DΦ(y − x) · ν_in dS(y).
For z ∈ ∂B(0, ε) we have DΦ(z) · ν_out = −1/(nα(n)ε^{n−1}); here the relevant normal (outward with respect to Ω_ε) points into the ball, so
DΦ(y − x) · ν_in = 1/(nα(n)ε^{n−1}),
and therefore
I1 = ∫_{∂B(x,ε)} u(y) (1/(nα(n)ε^{n−1})) dS(y) = −∫_{∂B(x,ε)} u(y) dS(y) → u(x) as ε → 0.
Putting things together:
0 = ∫_{∂Ω} [u(y) DΦ(y − x) · ν − Du(y) · ν Φ(y − x)] dS(y) + u(x) + ∫_Ω ∆u(y) Φ(y − x) dy
=⇒ u(x) = ∫_{∂Ω} [Du(y) · ν Φ(y − x) − u(y) DΦ(y − x) · ν] dS(y) − ∫_Ω ∆u(y) Φ(y − x) dy.
Recall: for Ω bounded and u ∈ C^2 we started from
0 = ∫_{Ω_ε} u(y) ∆Φ(y − x) dy
and obtained the representation
u(x) = ∫_{∂Ω} [(∂u/∂ν)(y) Φ(y − x) − u(y) (∂Φ/∂ν)(y − x)] dS(y) − ∫_Ω ∆u(y) Φ(y − x) dy.
Green’s function = fundamental solution with zero boundary data.
Idea: Correct the boundary values of Φ(y − x) by a harmonic corrector to be equal to
zero.
Green’s function G(x, y) = Φ(y − x)− φx(y),
where
∆yφ
x(y) = 0 in Ω
φx(y) = Φ(y − x) on ∂Ω
The same calculation as before gives
u(x) = − ∫_{∂Ω} u(y) (∂G/∂ν)(x, y) dS(y) − ∫_Ω G(x, y) ∆u(y) dy.
Expectation: the solution of
∆u = 0 in Ω, u = g on ∂Ω
is given by
u(x) = − ∫_{∂Ω} g(y) (∂G/∂ν)(x, y) dS(y).
Theorem 2.5.1 (Symmetry of Green's Functions): ∀x, y ∈ Ω with x ≠ y, G(x, y) = G(y, x).
Proof: Define
v(z) = G(x, z),  w(z) = G(y, z).
Then v is harmonic for z ≠ x, w is harmonic for z ≠ y, and v = w = 0 on ∂Ω.
[Figure: Ω with the balls B(x, ε) and B(y, ε) removed.]
Take ε small enough that B(x, ε) ⊂ Ω and B(y, ε) ⊂ Ω, and set
Ω_ε = Ω \ (B(x, ε) ∪ B(y, ε)).
0 = ∫_{Ω_ε} ∆v · w dz
= ∫_{∂Ω_ε} (∂v/∂ν) w dS − ∫_{Ω_ε} Dv · Dw dz
= ∫_{∂Ω_ε} ((∂v/∂ν) w − v (∂w/∂ν)) dS + ∫_{Ω_ε} v ∆w dz.
The last term on the RHS is 0, and v, w vanish on ∂Ω, so only the small spheres contribute:
∫_{∂B(x,ε)} ((∂v/∂ν) w − v (∂w/∂ν)) dS = ∫_{∂B(y,ε)} (v (∂w/∂ν) − (∂v/∂ν) w) dS.
Now, since w is nice near x, we have
|∫_{∂B(x,ε)} (∂w/∂ν) v dS| ≤ Cε^{n−1} sup_{∂B(x,ε)} |v| = o(1) as ε → 0.
On the other hand, v(z) = Φ(z − x) − φ^x(z), where φ^x is smooth in Ω. Thus
lim_{ε→0} ∫_{∂B(x,ε)} (∂v/∂ν) w dS = lim_{ε→0} ∫_{∂B(x,ε)} (∂Φ/∂ν)(z − x) w(z) dS = w(x)
by the previous calculations. Thus the left-hand side of the identity above converges to w(x) as ε → 0. Likewise the right-hand side converges to v(y). Consequently
G(y, x) = w(x) = v(y) = G(x, y).
2.5.2 Green’s Functions by Reflection Methods
2.5.2.1 Green’s Function for the Halfspace
R^n_+ = {x ∈ R^n : x_n > 0}.
We want to use the representation
u(x) = − ∫_{∂Ω} u(y) (∂G/∂ν)(x, y) dS(y).
In this case ∂Ω = ∂R^n_+ = {x ∈ R^n : x_n = 0}.
Idea: reflect x into x̄ = (x_1, . . . , x_{n−1}, −x_n) and take φ^x(y) = Φ(y − x̄), so that
G(x, y) = Φ(y − x) − φ^x(y) = Φ(y − x) − Φ(y − x̄).
This works since x ∈ R^n_+ implies x̄ ∉ R^n_+. For y ∈ ∂R^n_+ we have y_n = 0 and |y − x| = |y − x̄|, thus Φ(y − x) = Φ(y − x̄) on ∂R^n_+, i.e. G(x, ·) vanishes on the boundary. Guessing from what we showed on bounded domains:
u(x) = − ∫_{∂R^n_+} g(y) (∂G/∂ν)(x, y) dy,
where ∂G/∂ν = −∂G/∂y_n.
n ≥ 3:
Φ(z) = (1/(n(n − 2)α(n))) (1/|z|^{n−2}) =⇒ DΦ(z) = −(1/(nα(n))) z/|z|^n,
so
−(∂/∂y_n) Φ(y − x) = (1/(nα(n))) (y_n − x_n)/|y − x|^n,
−(∂/∂y_n) Φ(y − x̄) = (1/(nα(n))) (y_n + x_n)/|y − x̄|^n.
So if y ∈ ∂R^n_+,
(∂G/∂ν)(x, y) = −(∂G/∂y_n)(x, y) = −(2x_n/(nα(n))) (1/|x − y|^n).
So our expectation is
u(x) = − ∫_{∂R^n_+} g(y) (∂G/∂ν)(x, y) dy = (2x_n/(nα(n))) ∫_{∂R^n_+} g(y)/|x − y|^n dy.    (2.8)
This is Poisson's formula on R^n_+;
K(x, y) = 2x_n/(nα(n)|x − y|^n)
is called Poisson's kernel on R^n_+.
Theorem 2.5.2 (Poisson’s formula on the half space): g ∈ C0(∂Rn+), g is bounded and
u(x) is given by (2.8), then
i.) u ∈ C∞(Rn+)
ii.) ∆u = 0 in Rn+
iii.) limx→x0 u(x) = g(x0) for x0 ∈ ∂Rn+Proof: y 7→ G(x, y) is harmonic x 6= y .
i.) By symmetry, G(x, y) = G(y, x)
=⇒ G is harmonic in its first argument for x ≠ y
=⇒ derivatives of G are harmonic for x ≠ y
=⇒ the representation formula involves ∂_ν G(x, y) with y ∈ ∂R^n_+ and x ∈ R^n_+, so x ≠ y
=⇒ x ↦ ∂_ν G(x, y) is harmonic for x ∈ R^n_+.
ii.) u is differentiable
– integrate on n − 1 dimensional surface
– the decay at ∞ is of order 1/|x|^n =⇒ the integrand decays fast enough to be integrable, and the same is true for its derivatives (since g is bounded) =⇒ we may differentiate under the integral =⇒ ∆u(x) = 0.
iii.) Boundary values of u. Fix x0 ∈ ∂R^n_+ and ε > 0, and choose δ > 0 small enough that |g(x) − g(x0)| < ε if |x − x0| < δ, x ∈ ∂R^n_+ (by continuity of g). For x ∈ R^n_+ with |x − x0| < δ/2,
|u(x) − g(x0)| = |(2x_n/(nα(n))) ∫_{∂R^n_+} (g(y) − g(x0))/|x − y|^n dy|    (2.9)
since
∫_{∂R^n_+} K(x, y) dy = 1 (HW# 4).    (2.10)
Split:
(2.9) ≤ ∫_{∂R^n_+ ∩ B(x0,δ)} K(x, y) |g(y) − g(x0)| dy + ∫_{∂R^n_+ \ B(x0,δ)} K(x, y) |g(y) − g(x0)| dy,
where the first integral is ≤ ε by (2.10), since |g(y) − g(x0)| ≤ ε on ∂R^n_+ ∩ B(x0, δ).
[Figure: x ∈ R^n_+ with |x − x0| < δ/2 and y ∈ ∂R^n_+ with |x0 − y| > δ.]
For |x − x0| < δ/2 and |x0 − y| > δ:
|y − x0| ≤ |y − x| + |x − x0| ≤ |y − x| + δ/2 ≤ |y − x| + (1/2)|x0 − y|
=⇒ (1/2)|y − x0| ≤ |y − x|.
For the second integral,
∫_{∂R^n_+ \ B(x0,δ)} (2x_n/(nα(n))) |g(y) − g(x0)|/|x − y|^n dy
≤ (4x_n max |g|/(nα(n))) ∫_{∂R^n_+ \ B(x0,δ)} (2/|x0 − y|)^n dy
≤ (4x_n max |g|/(nα(n))) C(δ) ≤ ε
if x_n is sufficiently small. The two estimates together give |u(x) − g(x0)| ≤ 2ε if |x − x0| is small enough =⇒ u(x) → g(x0) as x → x0, as asserted.
2.5.2.2 Green’s Function for Balls
1. unit ball
2. scale the result to arbitrary balls
3. inversion in the unit sphere
[Figure: inversion in the unit sphere: x̄ = x/|x|^2 for x ≠ 0, so that |x̄| |x| = 1.]
Φ(y − x) = { −(1/2π) ln |y − x|, n = 2;  (1/(n(n − 2)α(n))) (1/|x − y|^{n−2}), n ≥ 3 }.
For x = 0 and |y| = 1 this is the constant 0 (n = 2) or 1/(n(n − 2)α(n)) (n ≥ 3), so for x = 0 the corrector can be taken to be this constant. We assume x ≠ 0 from now on. For x ∈ B(0, 1) we have x̄ ∉ B̄(0, 1), so Φ centered at x̄ is a good (singularity-free) function on B(0, 1).
n ≥ 3: y ↦ Φ(y − x̄) is harmonic in B(0, 1),
hence y ↦ |x|^{2−n} Φ(y − x̄) is harmonic,
and |x|^{2−n} Φ(y − x̄) = Φ(|x|(y − x̄)), so y ↦ Φ(|x|(y − x̄)) is harmonic.
Corrector: φ^x(y) = Φ(|x|(y − x̄)), harmonic in B(0, 1). For y ∈ ∂B(0, 1), i.e. |y| = 1:
|x|^2 |y − x̄|^2 = |x|^2 |y − x/|x|^2|^2 = |x|^2 (|y|^2 − 2(y, x)/|x|^2 + |x|^2/|x|^4) = |x|^2 − 2(y, x) + |y|^2 = |x − y|^2
=⇒ Φ(|x|(y − x̄)) = Φ(y − x) if y ∈ ∂B(0, 1).
Definition 2.5.3: G(x, y) = Φ(y − x) − Φ(|x|(y − x̄)) is the Green's function on B(0, 1).
Expectation: The solution of
∆u = 0 in B(0, 1), u = g on ∂B(0, 1)
is given by
u(x) = − ∫_{∂B(0,1)} ∂_ν G(x, y) g(y) dS(y) = ((1 − |x|^2)/(nα(n))) ∫_{∂B(0,1)} g(y)/|x − y|^n dS(y).
Poisson’s formula on B(0, 1)
K(x, y) =1− |x |2nα(n)
1
|x − y |n
is called Poisson’s kernel on B(0, 1)
Before scaling this result to arbitrary balls, we verify the formula for the normal derivative of G. From
DΦ(z) = −(1/(nα(n))) z/|z|^n
we get
(∂Φ/∂y)(y − x) = −(1/(nα(n))) (y − x)/|y − x|^n,
(∂Φ/∂y)(|x|(y − x̄)) = −(1/(nα(n))) (|x|(y − x̄))/||x|(y − x̄)|^n · |x|.
On ∂B(0, 1) we have |y − x|^2 = ||x|(y − x̄)|^2, and x̄|x|^2 = x, so there
(∂Φ/∂y)(|x|(y − x̄)) = −(1/(nα(n))) (y|x|^2 − x)/|y − x|^n.
Hence
(∂G/∂y)(x, y) = −(1/(nα(n))) [(y − x)/|y − x|^n − (y|x|^2 − x)/|y − x|^n] = −(1/(nα(n))) y(1 − |x|^2)/|y − x|^n.
Since ν = y on ∂B(0, 1),
(∂G/∂ν)(x, y) = y · (∂G/∂y)(x, y) = −(1/(nα(n))) (1 − |x|^2)/|x − y|^n.
This gives, for a solution of ∆u = 0 in B(0, 1) with u = g on ∂B(0, 1),
u(x) = ((1 − |x|^2)/(nα(n))) ∫_{∂B(0,1)} g(y)/|x − y|^n dS(y).
Now transform this to balls with radius r: solve
∆u = 0 in B(0, r), u = g on ∂B(0, r).
Define v(x) = u(rx) on B(0, 1); then
∆v(x) = r^2 ∆u(rx) = 0, and v(x) = u(rx) = g(rx) for x ∈ ∂B(0, 1).
Poisson's formula on B(0, 1):
v(x) = ((1 − |x|^2)/(nα(n))) ∫_{∂B(0,1)} g(ry)/|x − y|^n dS(y) for x ∈ B(0, 1).
Substituting back, u(x) = v(x/r), and the change of variables y ↦ y/r on the boundary (which introduces a factor r^{n−1} in the surface measure) gives
u(x) = ((1 − |x/r|^2)/(nα(n))) ∫_{∂B(0,r)} (g(y)/|x/r − y/r|^n) (dS(y)/r^{n−1})
u(x) = ((r^2 − |x|^2)/(nα(n) r)) ∫_{∂B(0,r)} g(y)/|x − y|^n dS(y).    (2.11)
Poisson’s formula on B(0, r)
Theorem 2.5.4 (): Suppose that g ∈ C(∂B(0, r)). Then u(x) given by (2.11) has the following properties:
i.) u ∈ C^∞(B(0, r))
ii.) ∆u = 0
iii.) ∀x0 ∈ ∂B(0, r): lim_{x→x0, x∈B(0,r)} u(x) = g(x0)
Proof: Exercise, similar to the proof in the half-space.
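Formula (2.11) is easy to test numerically (this check is not part of the notes; names are ad hoc): take boundary data coming from a known harmonic function, here g = y_1^2 − y_2^2 on the unit circle, and compare the Poisson integral with the exact values:

```python
import math

def poisson_disk(g, x1, x2, m=20000):
    # u(x) = (1 - |x|^2)/(2*pi) * integral over the unit circle of g(y)/|x - y|^2 dS(y)
    # (formula (2.11) with n = 2, r = 1, so n*alpha(n)*r = 2*pi)
    s = 0.0
    for k in range(m):
        t = 2.0 * math.pi * (k + 0.5) / m
        y1, y2 = math.cos(t), math.sin(t)
        s += g(y1, y2) / ((x1 - y1) ** 2 + (x2 - y2) ** 2)
    s *= 2.0 * math.pi / m                      # arc length element on the unit circle
    return (1.0 - x1 * x1 - x2 * x2) / (2.0 * math.pi) * s

g = lambda y1, y2: y1 * y1 - y2 * y2            # boundary values of the harmonic x1^2 - x2^2
print(poisson_disk(g, 0.3, 0.4), 0.3 ** 2 - 0.4 ** 2)  # both -0.07
```

Since the integrand is smooth and periodic in the angle, the uniform quadrature converges extremely fast, so the agreement is essentially exact.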
2.5.3 Energy Methods for Poisson’s Equation
Theorem 2.5.5 (): There exists at most one solution of
−∆u = f in Ω
u = g on ∂Ω
if ∂Ω is smooth, f , g continuous.
Proof: Let u1 and u2 be two solutions and define w = u1 − u2; then
−∆w = 0 in Ω, w = 0 on ∂Ω.
0 = ∫_Ω −∆w · w dx = − ∫_{∂Ω} (Dw · ν) w dS(y) + ∫_Ω |Dw|^2 dx = ∫_Ω |Dw|^2 dx,
using w = 0 on ∂Ω.
=⇒ Dw = 0 in Ω
=⇒ w = constant in Ω
=⇒ w = 0 in Ω (since w = 0 on ∂Ω)
=⇒ u1 = u2 in Ω.
2.5.4 Perron’s Method of Subharmonic Functions
• Poisson’s formula on balls
• compactness argument
• construct a solution of ∆u = 0 in Ω
• Check boundary data, barriers.
Assumption: Ω open, bounded and connected.
Definition 2.5.6: A continuous function u is said to be subharmonic on Ω, if for all balls
B(x, r) ⊂⊂ Ω, and all harmonic functions h with u ≤ h on ∂B(x, r) we have u ≤ h on
B(x, r).
Definition 2.5.7: Superharmonic functions are defined analogously, with the reverse inequalities.
Proposition 2.5.8 (): Suppose u is subharmonic in Ω. Then u satisfies a strong maximum
principle. Moreover, if v is superharmonic on Ω with u ≤ v on ∂Ω, then either u < v on Ω,
or u = v on Ω.
Proof: u satisfies a sub-mean-value property: for B(x, r) ⊂⊂ Ω let h be the solution of
∆h = 0 in B(x, r), h = u on ∂B(x, r).
Then
u(x) ≤ h(x) = −∫_{∂B(x,r)} h(y) dS(y) = −∫_{∂B(x,r)} u(y) dS(y).
Idea: local modification of subharmonic functions.
Definition 2.5.9: Let u be subharmonic on Ω and B ⊂⊂ Ω. Let ū be the solution of
∆ū = 0 in B, ū = u on ∂B.
Define
U(x) = { ū(x) in B;  u(x) on Ω \ B }.
U is called the harmonic lifting of u on B. U is subharmonic.
[Figure: in 1D, the harmonic lifting replaces u on B by the affine (harmonic) chord through its endpoint values.]
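In one dimension "harmonic" means affine, so the harmonic lifting on an interval B = (a, b) simply replaces u there by the chord through (a, u(a)) and (b, u(b)). A minimal sketch (not from the notes; names are ad hoc):

```python
def harmonic_lift_1d(u, a, b):
    # in 1D the harmonic replacement on (a, b) is the affine chord through
    # (a, u(a)) and (b, u(b)); outside (a, b) the lifting equals u itself
    ua, ub = u(a), u(b)
    def U(x):
        if a < x < b:
            return ua + (ub - ua) * (x - a) / (b - a)
        return u(x)
    return U

u = lambda x: x * x                 # convex, hence subharmonic in 1D
U = harmonic_lift_1d(u, -1.0, 1.0)
# the lifting lies above the subharmonic function inside the interval
print(all(U(k / 10.0) >= u(k / 10.0) for k in range(-15, 16)))  # True
```

This is exactly the picture of the figure above: chords of a convex function lie above the graph, and U agrees with u outside the interval.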
Proposition 2.5.10 (): If u1, . . . , um are subharmonic, then u = max{u1, . . . , um} is subharmonic.
Now let Φ be a bounded function on ∂Ω. A continuous subharmonic function u is called a subfunction relative to Φ if u ≤ Φ on ∂Ω. Superfunctions are defined analogously. In particular, the constant inf_{∂Ω} Φ is a subfunction, and the constant sup_{∂Ω} Φ is a superfunction.
Theorem 2.5.11 (): Let Φ be a bounded function on ∂Ω. Set
S_Φ = {v : v is a subfunction relative to Φ}
and define u = sup_{v ∈ S_Φ} v. Then u is harmonic in Ω.
Remark: u is well-defined, since w(x) = sup_{∂Ω} Φ is a superfunction, every subfunction is ≤ every superfunction on ∂Ω, and thus by Proposition 2.5.8 every subfunction is ≤ every superfunction in Ω. Hence
every subfunction ≤ sup_{∂Ω} Φ < ∞.
Proof: u is well-defined. Fix y ∈ Ω; there exist v_k ∈ S_Φ such that v_k(y) → u(y). Replace v_k by max{v_k, inf_{∂Ω} Φ} if needed to ensure that the v_k are bounded from below. Choose B(y, R) ⊂⊂ Ω, R > 0.
Let v̄_k be the harmonic lifting of v_k on B(y, R). Then:
• the v̄_k are uniformly bounded,
• v̄_k(y) → u(y),
• v̄_k is harmonic in B(y, R).
HW1 (compactness result): there is a subsequence v̄_{k_i} that converges uniformly on all compact subsets of B(y, R) to a harmonic function v. By construction, v(y) = u(y).
Assertion: u = v on B(y, R) =⇒ u harmonic on B(y, R) =⇒ u harmonic in Ω.
Clearly v ≤ u in B(y, R). Suppose otherwise:
∃z ∈ B(y, R) with v(z) < u(z)
=⇒ ∃ũ ∈ S_Φ such that v(z) < ũ(z) ≤ u(z).
Take w_k = max{ũ, v̄_{k_i}} and let W_k be the harmonic lifting of w_k on B. As before, there is a subsequence of the W_k that converges to a harmonic function w on B with, by construction,
v ≤ w ≤ u and v(y) = w(y) = u(y).
The maximum principle =⇒ v = w. However, w(z) ≥ ũ(z) > v(z), a contradiction.
=⇒ u = v.
2.5.4.1 Boundary behavior
Definition 2.5.12: Let ξ ∈ ∂Ω. A function w ∈ C(Ω̄) is called a barrier at ξ relative to Ω if
i.) w is superharmonic in Ω,
ii.) w(ξ) = 0 and w > 0 in Ω̄ \ {ξ}.
Theorem 2.5.13 (): Let Φ be bounded on ∂Ω and let u be Perron's solution of ∆u = 0 as constructed before. Assume that ξ is a regular boundary point, i.e., there exists a barrier at ξ, and that Φ is continuous at ξ. Then u is continuous at ξ and u(x) → Φ(ξ) as x → ξ.
[Figure: a barrier w at ξ ∈ ∂Ω: w(ξ) = 0 and w > 0 elsewhere in Ω̄.]
Proof: Let M = sup_{∂Ω} |Φ| and let w be a barrier at ξ. Since Φ is continuous at ξ: fix ε > 0; then ∃δ > 0 such that |Φ(x) − Φ(ξ)| < ε if |x − ξ| < δ, x ∈ ∂Ω. Choose k > 0 such that kw(x) > 2M for x ∈ Ω̄ with |x − ξ| ≥ δ. Then
x ↦ Φ(ξ) − ε − kw(x) is a subfunction,
x ↦ Φ(ξ) + ε + kw(x) is a superfunction
(w superharmonic =⇒ −w subharmonic). By construction,
Φ(ξ) − ε − kw(x) ≤ u(x) ≤ Φ(ξ) + ε + kw(x) ⇐⇒ |u(x) − Φ(ξ)| ≤ ε + kw(x),
and since w(x) → 0 as x → ξ,
|u(x) − Φ(ξ)| < 2ε if |x − ξ| is small enough.
Theorem 2.5.14 (): Let Ω be bounded, open, and connected, with every ξ ∈ ∂Ω regular, and let g be continuous on ∂Ω. Then there exists a solution of
∆u = 0 in Ω,
u = g on ∂Ω.
Remark: A simple criterion for ξ ∈ ∂Ω to be regular is that Ω satisfies an exterior sphere condition at ξ: there exists a ball B(y, R), R > 0, such that B̄(y, R) ∩ Ω̄ = {ξ}.
[Figure: an exterior ball B(y, R) touching Ω̄ only at ξ.]
Barrier:
w(x) = { R^{2−n} − |x − y|^{2−n}, n ≥ 3;  ln(|x − y|/R), n = 2 }.
2.6 Solution of the Dirichlet Problem by the Perron Process
2.6.1 Convergence Theorems
Important to the proof of existence of solutions to Laplace’s equation is the concept of
convergence of harmonic functions.
Theorem 2.6.1 (): A bounded sequence of harmonic functions on a domain Ω contains a
subsequence which converges uniformly, together with its derivatives, to a harmonic function.
Proof: Take Ω′ ⊂ Ω compact. We calculate, via the mean value theorem of calculus,
|u_m(x) − u_m(y)| ≤ max_{0 ≤ t ≤ 1} |Du_m(tx + (1 − t)y)| · |x − y|
≤ max_{Ω′} |Du_m| · |x − y|
≤ C sup_{Ω′} |u_m| · |x − y|    (interior derivative estimates)
≤ C′(K, n, L)|x − y|,
where L is a constant which bounds the sequence elements u_m. From this we see that the sequence is equicontinuous. Next we recall that the Arzelà–Ascoli theorem asserts that such a sequence has a subsequence converging uniformly to a limit. Now one may apply the same calculation iteratively to any order of derivative (iteratively refining the subsequence for each new derivative). From this we conclude that there is a subsequence whose derivatives up to and including order |β| all converge uniformly to a limit, for any |β|. It now follows that the limit is indeed a solution of Laplace's equation.
2.6.2 Generalized Definitions & Harmonic Lifting
Before proving the existence of solution to the Dirichlet problem for Laplace’s equation, one
needs to extend the notions of subharmonic(superharmonic) functions to non-C2 functions.
Definition 2.6.2: u ∈ C^0(Ω) is subharmonic (superharmonic) in Ω if for any ball B ⊂⊂ Ω and harmonic function h ∈ C^2(B) ∩ C^0(B̄) with h ≥ (≤) u on ∂B, one has h ≥ (≤) u on B.
Note: If u is subharmonic and superharmonic, then it is harmonic.
From this new definition, some properties are immediate.
Properties:
i.) The Strong Maximum Principle still applies to this extended definition: since u is subharmonic, u ≤ h for h harmonic in B with the same boundary values, and the Strong Maximum Principle follows by applying the Poisson Integral formula in this situation.
ii.) w(x) = max (min) {u1(x), . . . , um(x)} is subharmonic (superharmonic) if u1, . . . , um are subharmonic (superharmonic).
iii.) The harmonic lifting on a ball B of a subharmonic function u on Ω is defined as
U(x) = { u(x) for x ∉ B;  h(x) for x ∈ B },
where h ∈ C^2(B) ∩ C^0(B̄) is harmonic in B with h = u on ∂B.
Note: The key thing to realize about the definition of harmonic lifting is that the “lifted”
part is harmonic in our classical C2 sense; it is simply the Poisson integral solution on the
ball with u taken as boundary data.
Proposition 2.6.3 (): U is subharmonic on Ω, when h is harmonic in B.
[Figure: overlapping balls B and B′, with the regions B′ ∩ B and B′ \ B.]
Proof: Refer to the figure above as you read this proof. Take another ball B′ ⊂⊂ Ω along with another harmonic function h′ ∈ C^2(B′) ∩ C^0(B̄′) which is ≥ U on ∂B′. Since h′ ≥ U ≥ u on ∂B′ and u is subharmonic, one has h′ ≥ u on B′, and in particular h′ ≥ U on B′ \ B. This, in turn, leads to the assertion that h′ ≥ U on ∂B ∩ B′, i.e. h′ ≥ U on ∂(B′ ∩ B). As h′ − U is harmonic in B′ ∩ B (in the classical sense), we can apply the Weak Maximum Principle to ascertain that h′ ≥ U on B ∩ B′. Since B′ is arbitrary, the proposition follows.
2.6.3 Perron’s Method
Now we can start the pursuit of the main objective of this section, namely to prove existence of solutions to the Dirichlet Problem:
∆u = 0 in Ω (Ω a bounded domain in R^n),
u = φ on ∂Ω, φ ∈ C^0(∂Ω).
The first step requires the following definition.
Definition 2.6.4: A subfunction (superfunction) in Ω is subharmonic (superharmonic) in Ω, is in C^0(Ω̄), and in addition is ≤ φ (≥ φ) on ∂Ω.
Now, we define
S_φ = {all subfunctions},
along with
u(x) = sup_{v ∈ S_φ} v(x),
where the supremum is understood in the pointwise sense. u(x) is bounded, as v ∈ S_φ implies v ≤ max_{∂Ω} φ via the Weak Maximum Principle.
Remark: Essentially, the above definitions allow one to construct a weak formulation (or
weak solution) to the Dirichlet Problem via Maximum Principles. Indeed, at this point, it
has not been ascertained whether or not u is continuous.
Theorem 2.6.5 (): u is harmonic in Ω.
Proof: First, fix y ∈ Ω. Since u(y) = sup_{v ∈ S_φ} v(y), there is a sequence {v_m} ⊂ S_φ such that v_m(y) → u(y). Of course, we may assume that the v_m are bounded from above by max_{∂Ω} φ. Now, consider a ball B ⊂⊂ Ω and replace all v_m by their harmonic liftings on B. Denoting the resulting sequence by V_m, it is clear that {V_m} ⊂ S_φ and that V_m(y) → u(y). Since the V_m are all harmonic (in the C^2 sense) in B, we apply our harmonic convergence theorem to extract a subsequence (again denoted V_m after relabeling) which converges uniformly on compact subsets of B to a harmonic function v in B. Obviously, we have v(y) = u(y), but we now make the following
Claim: v = u in B.
Proof of Claim: Suppose there is z ∈ B such that v(z) < u(z). This implies that there exists a function ṽ ∈ S_φ such that ṽ(z) > v(z). Replace V_m by ṽ_m = max{V_m, ṽ}. Next, take the harmonic liftings in B of these to get Ṽ_m. Again utilizing the harmonic convergence theorem, we extract a subsequence (still denoted Ṽ_m after relabeling) such that Ṽ_m converges locally uniformly to a harmonic function w ≥ v in B, with w(y) = v(y) = u(y). Thus, by the Strong Maximum Principle, w = v; but w(z) ≥ ṽ(z) > v(z), a contradiction.
Thus v = u in B, which implies that u is indeed harmonic.
Now, to complete our existence proof, we need to see if the Perron solution of the Dirichlet
Problem attains the proper boundary data. To do this we make the next definition.
Definition 2.6.6: A Barrier at y ∈ ∂Ω is a function w ∈ C^0(Ω̄) that has the following properties:
i.) w is superharmonic,
ii.) w(y) = 0,
iii.) w > 0 on Ω̄ \ {y}.
Theorem 2.6.7 (): Let u be a Perron solution. Then u ∈ C0(Ω) with u = φ on ∂Ω if and
only if there exists a barrier at each point y ∈ ∂Ω.
Proof:
(Only if) Solve the problem with the boundary condition φ(x) = |x − y|; the resulting solution is a barrier at y.
(If) Take ε > 0; by continuity of φ there exists a δ > 0 such that |x − y| < δ implies |φ(x) − φ(y)| < ε. Now choose a constant K(ε) large enough so that Kw > 2 max_{∂Ω} |φ| if |x − y| ≥ δ. Next define w_± := φ(y) ± (ε + Kw). It is clear that w_− is a subfunction and w_+ a superfunction with w_− ≤ φ ≤ w_+ on ∂Ω, and hence w_− ≤ u ≤ w_+. Application of the Weak Maximum Principle now yields
|u(x) − φ(y)| ≤ ε + Kw(x).
Since w(x) → 0 as x → y (by the definition of a barrier), one sees that u(x) → φ(y) as x → y.
[Figure: an exterior sphere S touching Ω̄ only at y ∈ ∂Ω.]
Now that it has been shown that the existence of Perron solutions attaining the boundary data is equivalent to the existence of barriers on ∂Ω, we wish to ascertain precisely which criteria on ∂Ω imply the existence of barriers. With that, we make the following definition.
Definition 2.6.8: A domain Ω satisfies an exterior sphere condition if at every point y ∈ ∂Ω there exists a sphere S such that S ∩ Ω̄ = {y} (refer to the above figure).
Correspondingly, we have the following theorem.
Theorem 2.6.9 (): If Ω satisfies an exterior sphere condition, then there exists a barrier at
each point y ∈ ∂Ω.
Proof: Simply take
w(x) = { R^{2−n} − |x − z|^{2−n} for n > 2;  ln(|x − z|/R) for n = 2 },
where z is the center and R the radius of the exterior sphere S at y, as the barrier at y ∈ ∂Ω. It is straightforward to verify that this function is indeed a barrier at y ∈ ∂Ω.
Remarks:
i.) It is easy to see that if the boundary ∂Ω is smooth, in the sense that locally it is the graph of a C^2 function, then it satisfies an exterior sphere condition. Also, all convex domains satisfy an exterior sphere condition.
ii.) There are more general criteria for the existence of barriers, like the exterior cone condition, but the proofs are harder.
2.7 Exercises
2.1:
a) Find all solutions of the equation ∆u+ cu = 0 with radial symmetry in R3. Here c ≥ 0
is a constant.
b) Prove that
Φ(x) = −cos(√c |x|)/(4π|x|)
is a fundamental solution for the equation, i.e., show that for all f ∈ C^2(R^3) with compact support a solution of the equation ∆u + cu = f is given by
u(x) = ∫_{R^3} Φ(x − y)f(y) dy.
Hint: You can solve the ODE you get in a) by defining w(r) = rv(r). Show that w solves
w ′′ + cw = 0.
2.2: [Evans 2.5 #4] We say that v ∈ C2(Ω) is subharmonic if
−∆v ≤ 0 in Ω.
a) Suppose that v is subharmonic. Show that
v(x) ≤ (1/|B(x, r)|) ∫_{B(x,r)} v(y) dy for all B(x, r) ⊂ Ω.
b) Prove that subharmonic functions satisfy the maximum principle,
max_{Ω̄} v = max_{∂Ω} v.
c) Let φ : R → R be smooth and convex. Assume that u is harmonic and let v = φ(u).
Prove that v is subharmonic.
d) Let u be harmonic. Show that v = |Du|2 is subharmonic.
2.3: Let R^n_+ be the upper half space, R^n_+ = {x ∈ R^n : x_n > 0}.
a) Suppose that u is a smooth solution of ∆u = 0 in R^n_+ with u = 0 on ∂R^n_+. Show that u ≡ 0 if u is in addition bounded.
b) Use the result in a) to show that bounded solutions of the Dirichlet problem in the half space R^n_+,
∆u = 0 in R^n_+,
u = g on ∂R^n_+,
are unique. Show by a counterexample that the corresponding statement does not hold for unbounded solutions.
2.4: [Qualifying exam 08/98] Suppose that u ∈ C^2(R^2_+) ∩ C^0(R̄^2_+) is harmonic in R^2_+ and bounded in R^2_+ ∪ ∂R^2_+. Prove the maximum principle
sup_{R^2_+} u = sup_{∂R^2_+} u.
Hint: Consider for ε > 0 the harmonic function
v_ε(x, y) = u(x, y) − ε ln(x^2 + (y + 1)^2).
2.5: Fundamental solutions and Green’s functions in one dimension.
a) Find a fundamental solution Φ of Laplace’s equation in one dimension,
−u′′ = 0 in (−∞,∞).
Show that
u(x) = ∫_{−∞}^{∞} Φ(x − y)f(y) dy
is a solution of the equation −u′′ = f for all f ∈ C^2(R) with compact support.
b) Find the Green’s function for Laplace’s equation on the interval (−1, 1) subject to the
Dirichlet boundary conditions u(−1) = u(1) = 0.
c) Find the Green’s function for Laplace’s equation on the interval (−1, 1) subject to the
mixed boundary conditions u′(−1) = 0 and u(1) = 0, i.e., find a fundamental solution
Φ with Φ′(−1) = 0 and Φ(1) = 0.
d) Can you find the Green’s function for the Neumann problem on the interval (−1, 1),
i.e., can you find a fundamental solution Φ with Φ′(−1) = Φ′(1) = 0?
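These one-dimensional Green's functions are easy to test numerically. Below is a sketch in Python (not part of the notes; the kernel G is a candidate for part b), stated here as an assumption to be verified since finding it is the point of the exercise, and the midpoint grid is an ad hoc choice): if G is correct, then u(x) = ∫ G(x, y)f(y) dy should solve −u″ = f with u(±1) = 0.

```python
import math

def G(x, y):
    # Candidate Green's function for -u'' on (-1, 1) with u(-1) = u(1) = 0
    # (an assumption to be verified, not given in the notes):
    if y <= x:
        return (1 - x) * (1 + y) / 2
    return (1 + x) * (1 - y) / 2

def solve(f, x, m=4000):
    # u(x) = int_{-1}^{1} G(x, y) f(y) dy, midpoint rule
    hgrid = 2.0 / m
    return sum(G(x, -1 + (j + 0.5) * hgrid) * f(-1 + (j + 0.5) * hgrid)
               for j in range(m)) * hgrid

# For f = cos, the exact solution of -u'' = cos with u(+-1) = 0 is
# u(x) = cos(x) - cos(1).
for x in [-0.5, 0.0, 0.7]:
    assert abs(solve(math.cos, x) - (math.cos(x) - math.cos(1))) < 1e-4
print("Green's function check passed")
```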
2.6: Use the method of reflection to find the Green's function for the first quadrant R2++ =
{x ∈ R2 : x₁ > 0, x₂ > 0} in R2. Find a representation of a solution u of the boundary value
problem
∆u = 0 in R2++,
u = g on ∂R2++,
where g is a continuous and bounded function on ∂R2++.
2.7: Let Ω1 and Ω2 be open and bounded sets in Rn (n ≥ 3) with smooth boundary
that are connected. Suppose that Ω1 is compactly contained in Ω2, i.e., Ω̄1 ⊂ Ω2. Let
Gi(x, y) be the Green’s functions for the Laplace operator on Ωi , i = 1, 2. Show that
G1(x, y) < G2(x, y) for x, y ∈ Ω1, x ≠ y.
2.8: [Evans 2.5 #6] Use Poisson’s formula on the ball to prove that
r^{n−2} ((r − |x|)/(r + |x|)^{n−1}) u(0) ≤ u(x) ≤ r^{n−2} ((r + |x|)/(r − |x|)^{n−1}) u(0),
whenever u is positive and harmonic in B(0, r). This is an explicit form of Harnack’s inequality.
2.9: [Qualifying exam 08/97] Consider the elliptic problem in divergence form,
Lu = −∑_{i,j=1}^{n} ∂i( aij(x) ∂j u )
over a bounded and open domain Ω ⊂ Rn with smooth boundary. The coefficients satisfy
ai j ∈ C1(Ω) and the uniform ellipticity condition
∑_{i,j=1}^{n} aij(x) ξi ξj ≥ λ₀ |ξ|²   for all x ∈ Ω, ξ ∈ Rn
with a constant λ0 > 0. A barrier at x0 ∈ ∂Ω is a C2 function w satisfying
Lw ≥ 1 in Ω, w(x0) = 0, and w(x) ≥ 0 on ∂Ω.
a) Prove the comparison principle: If Lu ≥ Lv in Ω and u ≥ v on ∂Ω, then u ≥ v in Ω.
b) Let f ∈ C(Ω) and let u ∈ C2(Ω) ∩ C(Ω) be a solution of
Lu = f in Ω,
u = 0 on ∂Ω.
Show that there exists a constant C that does not depend on w such that
|∂u/∂ν(x₀)| ≤ C |∂w/∂ν(x₀)|,
where ν is the outward normal to ∂Ω at x0.
c) Let Ω be convex. Show that there exists a barrier w for all x0 ∈ ∂Ω.
3 The Heat Equation
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
3.1 Fundamental Solution 70
3.2 The Non-Homogeneous Heat Equation 74
3.3 Mean-Value Formula for the Heat Equation 76
3.4 Regularity of Solutions 82
3.5 Energy Methods 85
3.6 Backward Heat Equation 86
3.7 Exercises 86
3.1 Fundamental Solution
ut − ∆u = 0
Evans' approach: find a solution that is invariant under the dilation scaling u(x, t) 7→ λ^α u(λ^β x, λt).
Want u(x, t) = λ^α u(λ^β x, λt) for all λ > 0, with constants α and β that need to be determined.
Idea: take λ = 1/t, i.e. find
u(x, t) = (1/t^α) u(x/t^β, 1) = (1/t^α) v(x/t^β)
and let y = x/t^β. Find an equation for v:

∂u/∂t(x, t) = −(α/t^{α+1}) v(x/t^β) − (β/t^{α+1}) Dv(x/t^β) · (x/t^β)
∂u/∂xi(x, t) = (1/t^{α+β}) ∂v/∂yi(x/t^β)
∂²u/∂xi²(x, t) = (1/t^{α+2β}) ∂²v/∂yi²(x/t^β)

ut − ∆u = 0
⟹ (α/t^{α+1}) v(y) + (β/t^{α+1}) Dv(y)·y + (1/t^{α+2β}) ∆v(y) = 0

Choose β = 1/2 so that both powers of t agree:

α v(y) + (1/2) Dv(y)·y + ∆v(y) = 0

Assume v is radial, v(y) = w(r) = w(|y|). Then

Di w(|y|) = w′(|y|) yi/|y|,   Dv(y)·y = w′(|y|) |y|²/|y| = w′(r) r,

and the equation for w reads

α w + (1/2) w′(r) r + w″ + ((n − 1)/r) w′ = 0.

Take α = n/2:

(n/2) w + (1/2) w′ r + w″ + ((n − 1)/r) w′ = 0.

Multiply by r^{n−1}:

(n/2) r^{n−1} w + (1/2) rⁿ w′  [together = (1/2)(rⁿ w)′]  + r^{n−1} w″ + (n − 1) r^{n−2} w′  [together = (r^{n−1} w′)′]  = 0.

Integrate:

(1/2) rⁿ w + r^{n−1} w′ = a = constant.

Assume rⁿ w → 0 and r^{n−1} w′ → 0 as r → ∞; then a = 0 and

r^{n−1} w′ = −(1/2) rⁿ w  ⟹  w′ = −(1/2) r w  ⟹  w(r) = b e^{−r²/4}.

Hence v(y) = w(|y|) = b e^{−|y|²/4}, and with y = x/√t,

u(x, t) = (1/t^α) v(y) = (b/t^{n/2}) e^{−|x|²/4t}.
Definition 3.1.1: The function
Φ(x, t) = (1/(4πt)^{n/2}) e^{−|x|²/4t} for t > 0,   Φ(x, t) = 0 for t < 0,
is called the fundamental solution of the heat equation.
Motivation for choice of b:
Lemma 3.1.2 ():
∫_{Rn} Φ(y, t) dy = 1 for every t > 0.
Proof:
First,

(∫_{−∞}^{∞} e^{−x²} dx)² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−x²−y²} dy dx = ∫_0^{2π} ∫_0^{∞} e^{−r²} r dr dθ = 2π (−(1/2) e^{−r²}) |_0^{∞} = π.

Then

∫_{Rn} Φ(y, t) dy = (1/(4πt)^{n/2}) ∫_{Rn} e^{−|y|²/4t} dy.

Change coordinates: z = y/√(4t), dz = (1/(4t)^{n/2}) dy:

∫_{Rn} Φ(y, t) dy = (1/π^{n/2}) ∫_{Rn} e^{−|z|²} dz = 1.
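Lemma 3.1.2 can be sanity-checked numerically in one dimension. A minimal sketch (not part of the notes; the truncation interval and midpoint grid are ad hoc assumptions):

```python
import math

def Phi(x, t):
    # fundamental solution of the heat equation for n = 1, t > 0
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

# int_R Phi(y, t) dy should equal 1 for every t > 0; midpoint rule on [-L, L]
for t in [0.01, 0.5, 2.0]:
    L, m = 40.0, 40000
    hgrid = 2 * L / m
    mass = sum(Phi(-L + (j + 0.5) * hgrid, t) * hgrid for j in range(m))
    assert abs(mass - 1.0) < 1e-6, (t, mass)
print("unit mass confirmed for all t tested")
```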
Initial value problem for the heat equation in Rn: want to solve
ut − ∆u = 0 in Rn × (0,∞)
u = g on Rn × {t = 0}
Theorem 3.1.3 (): Suppose g is continuous and bounded on Rn, define
u(x, t) = ∫_{Rn} Φ(x − y, t) g(y) dy = (1/(4πt)^{n/2}) ∫_{Rn} e^{−|x−y|²/4t} g(y) dy

for x ∈ Rn, t > 0. Then

i.) u ∈ C∞(Rn × (0,∞))

ii.) ut − ∆u = 0 in Rn × (0,∞)

iii.) lim_{(x,t)→(x₀,0)} u(x, t) = g(x₀) for every x₀ ∈ Rn (here x ∈ Rn, t > 0).
Remark
i.) "smoothening": the solution is C∞ for all positive times, even if g is only continuous.
ii.) Suppose that g ≥ 0, g ≢ 0, and g ≡ 0 outside B(0, 1) ⊂ Rn. Then u(x, t) > 0 in Rn × (0,∞):
"infinite speed of propagation".
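Both remarks can be illustrated numerically for n = 1 with step data g = 1_{(0,∞)}, where the convolution has the closed form u(x, t) = (1/2)(1 + erf(x/√(4t))). A sketch (not part of the notes; truncation and grid are ad hoc choices):

```python
import math

def Phi(x, t):
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def u(x, t, L=30.0, m=30000):
    # u(x, t) = int_0^infty Phi(x - y, t) dy for g = 1 on (0, infty),
    # midpoint rule on [0, L]
    hgrid = L / m
    return sum(Phi(x - (j + 0.5) * hgrid, t) * hgrid for j in range(m))

for x, t in [(-1.0, 0.3), (0.0, 0.3), (0.5, 1.0)]:
    closed = 0.5 * (1 + math.erf(x / math.sqrt(4 * t)))
    assert abs(u(x, t) - closed) < 1e-6
    assert 0 < u(x, t) < 1   # strictly positive: infinite speed of propagation
print("smoothing check passed")
```

The discontinuous step becomes the smooth error function instantly: "smoothening" and infinite propagation speed in one picture.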
Proof: The function e^{−|x−y|²/4t} is infinitely many times differentiable

i.) in x and t, the integral exists, and we can differentiate under the integral sign.

ii.) ut − ∆u = 0, since Φ(x − y, t) is a solution.

iii.) Boundary data: given ε > 0, choose δ > 0 such that |g(x₀) − g(x)| < ε for |x − x₀| < δ. Then for |x − x₀| < δ/2,

|u(x, t) − g(x₀)| = |∫_{Rn} Φ(x − y, t)[g(y) − g(x₀)] dy|
≤ ∫_{B(x₀,δ)} Φ(x − y, t) |g(y) − g(x₀)| dy + ∫_{Rn∖B(x₀,δ)} Φ(x − y, t) |g(y) − g(x₀)| dy,

where |g(y) − g(x₀)| ≤ C in the second integral. The first integral is at most ε, since ∫ Φ dy = 1. In the second integral, |x − x₀| < δ/2 and |y − x₀| > δ imply (see the argument for the Green's function on the half space) |x − y| ≥ (1/2)|x₀ − y|. Then

(1/(4πt)^{n/2}) ∫_{Rn∖B(x₀,δ)} e^{−|x−y|²/4t} dy ≤ (1/(4πt)^{n/2}) ∫_{Rn∖B(x₀,δ)} e^{−|x₀−y|²/16t} dy
≤ (C/(4πt)^{n/2}) ∫_δ^{∞} e^{−r²/16t} r^{n−1} dr → 0

as t → 0. Hence |u(x, t) − g(x₀)| < 2ε if |x − x₀| < δ/2 and t is small enough, i.e. if |(x, t) − (x₀, 0)| is small enough.
Formally: Φt − ∆Φ = 0 in Rn × (0,∞), ∫_{Rn} Φ(x, t) dx = 1 for t > 0, and Φ(·, t) → δ₀ as t → 0, i.e., Φ(x, 0) = δ₀: the total mass is concentrated at x = 0.
3.2 The Non-Homogeneous Heat Equation
Non-homogeneous equation:
ut − ∆u = f in Rn × {t > 0}
u = 0 on Rn × {t = 0}   (3.1)
3.2.1 Duhamel’s Principle
Φ(x − y, t − s) is a solution of the heat equation. For fixed s ≥ 0, the function

u(x, t; s) = ∫_{Rn} Φ(x − y, t − s) f(y, s) dy

solves

ut(·; s) − ∆u(·; s) = 0 in Rn × (s, ∞)
u(·; s) = f(·, s) on Rn × {t = s}

Duhamel's principle: the solution of (3.1) is given by

u(x, t) = ∫_0^t u(x, t; s) ds,   i.e.

u(x, t) = ∫_0^t ∫_{Rn} Φ(x − y, t − s) f(y, s) dy ds   (3.2)
Compare to Duhamel’s principle applied to the transport equation.
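Formula (3.2) can be spot-checked numerically for n = 1. In the sketch below (not part of the notes), the exact solution u(x, t) = t e^{−x²} is an arbitrary choice, the forcing f is computed from it, and the quadrature parameters are ad hoc:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def f(x, t):
    # forcing chosen so that u(x, t) = t*exp(-x^2) solves u_t - u_xx = f:
    # u_t = exp(-x^2), u_xx = t*(4x^2 - 2)*exp(-x^2)
    return math.exp(-x * x) * (1 - t * (4 * x * x - 2))

def duhamel(x, t, ns=400, nz=200):
    # formula (3.2) for n = 1; the inner integral uses the substitution
    # y = x + sqrt(4*tau)*z, which turns the kernel into exp(-z^2)/sqrt(pi)
    # and stays well-behaved as tau = t - s -> 0
    hs = t / ns
    total = 0.0
    for i in range(ns):
        s = (i + 0.5) * hs
        w = math.sqrt(4 * (t - s))
        hz = 12.0 / nz
        inner = sum(math.exp(-z * z) / SQRT_PI * f(x + w * z, s)
                    for z in (-6.0 + (j + 0.5) * hz for j in range(nz))) * hz
        total += inner * hs
    return total

x, t = 0.4, 0.5
exact = t * math.exp(-x * x)
assert abs(duhamel(x, t) - exact) < 5e-3
print("Duhamel (heat) check passed")
```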
Remark: The solution of
ut − ∆u = f in Rn × {t > 0}
u = g on Rn × {t = 0}   (3.3)

is given by

u(x, t) = ∫_{Rn} Φ(x − y, t) g(y) dy + ∫_0^t ∫_{Rn} Φ(x − y, t − s) f(y, s) dy ds
Proof: See proof of following theorem.
Theorem 3.2.1 (): Suppose that f ∈ C²₁(Rn × (0,∞)) (i.e. f is twice continuously differentiable in x and once continuously differentiable in t) and that f has compact support. Then a solution of the non-homogeneous equation is given by (3.2).
Proof: Change coordinates:

u(x, t) = ∫_0^t ∫_{Rn} Φ(y, s) f(x − y, t − s) dy ds.

We can differentiate under the integral sign:

∂u/∂t(x, t) = ∫_0^t ∫_{Rn} Φ(y, s) ∂f/∂t(x − y, t − s) dy ds + ∫_{Rn} Φ(y, t) f(x − y, 0) dy

∂²u/∂xi∂xj(x, t) = ∫_0^t ∫_{Rn} Φ(y, s) ∂²f/∂xi∂xj(x − y, t − s) dy ds,

and all these derivatives are continuous. Hence

ut − ∆u = ∫_0^t ∫_{Rn} Φ(y, s)(∂t − ∆x) f(x − y, t − s) dy ds + ∫_{Rn} Φ(y, t) f(x − y, 0) dy   [the last term =: K]
= ∫_ε^t ∫_{Rn} Φ(y, s)(−∂s − ∆y) f(x − y, t − s) dy ds + ∫_0^ε ∫_{Rn} Φ(y, s)(−∂s − ∆y) f(x − y, t − s) dy ds + K
=: Iε + Jε + K.

For Jε:

|Jε| ≤ (sup |∂t f| + sup |D²f|) ∫_0^ε ∫_{Rn} Φ(y, s) dy ds   [the inner integral equals 1]
= Cε → 0 as ε → 0.

For Iε, integrate by parts in s and y:

Iε = ∫_ε^t ∫_{Rn} (∂s − ∆y)Φ(y, s)  [= 0]  f(x − y, t − s) dy ds
   − ∫_{Rn} Φ(y, t) f(x − y, 0) dy  [= −K]
   + ∫_{Rn} Φ(y, ε) f(x − y, t − ε) dy.

Collecting the terms:

(ut − ∆u)(x, t) = lim_{ε→0} ∫_{Rn} Φ(y, ε) f(x − y, t − ε) dy = f(x, t),

by a calculation similar to the proof of Theorem 3.1.3.
3.3 Mean-Value Formula for the Heat Equation
The goal of this section is the mean-value formula

u(x, t) = (1/4rⁿ) ∬_{E(x,t;r)} u(y, s) (|x − y|²/(t − s)²) dy ds.

Analogy: for Laplace's equation we averaged over ∂B(x, r) and B(x, r), and ∂B(x, r) is a level set of the fundamental solution. Expectation: the level sets of Φ(x, t) are the relevant sets here.
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
76Chapter 3: The Heat Equation
Definition 3.3.1: Let x ∈ Rn, t ∈ R, r > 0. Define the heat ball

E(x, t; r) = {(y, s) ∈ Rn × R : s ≤ t, Φ(x − y, t − s) ≥ 1/rⁿ}.

The point (x, t) lies on the boundary of E(x, t; r).
Definition 3.3.2: 1. Parabolic cylinder: for Ω an open, bounded set,

ΩT = Ω × (0, T].

Note that ΩT includes the top surface.

2. Parabolic boundary: ΓT := Ω̄T ∖ ΩT = bottom surface + lateral boundary.
(This is the relevant boundary for the maximum principle.)
Theorem 3.3.3 (Mean value formula): Let Ω be bounded and open, and let u ∈ C²₁(ΩT) be a solution of the heat equation. Then

u(x, t) = (1/4rⁿ) ∬_{E(x,t;r)} u(y, s) (|x − y|²/(t − s)²) dy ds
Gregory T. von Nessi
Versio
n::3
11
02
01
1
3.3 Mean-Value Formula for the Heat Equation77
for all E(x, t; r) ⊂ ΩT.
Proof: Translate in space and time so that x = 0, t = 0, and write E(r) = E(0, 0; r). Define

φ(r) = (1/4rⁿ) ∬_{E(r)} u(y, s) (|y|²/s²) dy ds.

Idea: compute φ′(r) and show that it vanishes.

E(r) is described by Φ(−y, −s) ≥ 1/rⁿ, i.e.

(1/(4π(−s))^{n/2}) e^{−|y|²/(−4s)} ≥ 1/rⁿ  ⟺  (1/(4π(−s)/r²)^{n/2}) e^{−(|y|/r)²/(−4(s/r²))} ≥ 1,

so the parabolic scaling h(y, s) = (ry, r²s) maps E(1) onto E(r). Change coordinates [using ∫_{h(Ω)} f(x) dx = ∫_Ω (f ∘ h)(x) |det(Dh)| dx, here |det(Dh)| = r^{n+2}]:

φ(r) = (1/4rⁿ) ∬_{h(E(1))} u(y, s) (|y|²/s²) dy ds
     = (1/4rⁿ) ∬_{E(1)} u(ry, r²s) (|ry|²/(r²s)²) r^{n+2} dy ds
     = (1/4) ∬_{E(1)} u(ry, r²s) (|y|²/s²) dy ds.

Differentiate in r:

φ′(r) = (1/4) ∬_{E(1)} [ Du(ry, r²s)·y (|y|²/s²) + ∂u/∂s(ry, r²s) · 2rs (|y|²/s²) ] dy ds,

and transform back to E(r):

φ′(r) = (1/4r^{n+1}) ∬_{E(r)} [ Du(y, s)·y (|y|²/s²) + 2 (∂u/∂s)(y, s) (|y|²/s) ] dy ds =: A + B.

The boundary ∂E(r) is the level set Φ(−y, −s) = 1/rⁿ, i.e. (1/(4π(−s))^{n/2}) e^{−|y|²/(−4s)} = 1/rⁿ. Taking logarithms, define

ψ(y, s) = −(n/2) ln(−4πs) + |y|²/4s + n ln r,

so that ψ = 0 on ∂E(r), and

Dψ(y, s) = y/2s,   hence |y|²/s = 2 y·Dψ.

Rewrite B, integrating by parts in y (there are no boundary terms since ψ = 0 on ∂E(r)):

B = (1/4r^{n+1}) ∬_{E(r)} 4 (∂u/∂s) y·Dψ dy ds
  = −(1/4r^{n+1}) ∬_{E(r)} [ 4 ∑_{i=1}^{n} (∂²u/∂s∂yi) yi ψ + 4n (∂u/∂s) ψ ] dy ds.

Integrate the first term by parts in s (again ψ = 0 on ∂E(r)), with ∂ψ/∂s = −n/2s − |y|²/4s²:

B = (1/4r^{n+1}) ∬_{E(r)} [ 4 ∑_{i=1}^{n} (∂u/∂yi) yi (∂ψ/∂s) − 4n (∂u/∂s) ψ ] dy ds
  = (1/4r^{n+1}) ∬_{E(r)} [ 4 Du·y (−n/2s − |y|²/4s²) − 4n (∂u/∂s) ψ ] dy ds
  = −A − (1/4r^{n+1}) ∬_{E(r)} [ (2n/s) Du·y + 4n (∂u/∂s) ψ ] dy ds.

Collecting our terms and using the heat equation ∂u/∂s = ∆u:

φ′(r) = A + B = (1/4r^{n+1}) ∬_{E(r)} [ −4n ∆u ψ − (2n/s) Du·y ] dy ds
      = (1/4r^{n+1}) ∬_{E(r)} [ 4n Du·Dψ − (2n/s) Du·y ] dy ds
      = 0,

since Dψ = y/2s gives 4n Du·Dψ = (2n/s) Du·y.

⟹ φ is constant
⟹ φ(r) = lim_{ρ→0} φ(ρ) = u(0, 0) lim_{ρ→0} (1/4ρⁿ) ∬_{E(ρ)} (|y|²/s²) dy ds = u(0, 0),

using that (1/4rⁿ) ∬_{E(r)} (|y|²/s²) dy ds = 1.

Recall:

MVP: (1/4rⁿ) ∬_{E(x,t;r)} u(y, s) (|x − y|²/(t − s)²) dy ds = u(x, t),
     (1/4rⁿ) ∬_{E(x,t;r)} (|x − y|²/(t − s)²) dy ds = 1.
Theorem 3.3.4 (Maximum principle for the heat equation on bounded and open domains): Let u ∈ C²₁(ΩT) ∩ C(Ω̄T) solve the heat equation in ΩT. Then

i.) max_{Ω̄T} u = max_{ΓT} u (maximum principle);

ii.) if Ω is connected and there exists (x₀, t₀) ∈ ΩT such that u(x₀, t₀) = max_{Ω̄T} u, then u is constant on Ω̄_{t₀} (strong maximum principle).
Remarks:
1. In ii.) we can only conclude for t ≤ t0.
2. Since −u is also a solution, we have the same assertion with max replaced by min.
3. Heat conducting bar: no heat concentration in the interior of the bar for t > 0.
Proof: Suppose u(x₀, t₀) = M = max_{Ω̄T} u with (x₀, t₀) ∉ ΓT. For r > 0 small enough, E(x₀, t₀; r) ⊂ ΩT. By the mean value formula,

M = u(x₀, t₀) = (1/4rⁿ) ∬_{E(x₀,t₀;r)} u(y, s) (|x₀ − y|²/(t₀ − s)²) dy ds ≤ M,

with equality only if u ≡ M on E(x₀, t₀; r).

Now suppose (y₀, s₀) ∈ ΩT with s₀ < t₀ is such that the line segment L connecting (y₀, s₀) and (x₀, t₀) lies in ΩT. Define

r₀ = min{ s ≥ s₀ : u(x, t) = M for all (x, t) ∈ L with s ≤ t ≤ t₀ }.

Either r₀ = s₀, and then u(y₀, s₀) = u(x₀, t₀) = M, or r₀ > s₀. In the latter case, let (z₀, r₀) denote the point of L at time r₀. For r small enough, E(z₀, r₀; r) ⊆ ΩT, and the argument above shows u ≡ M on E(z₀, r₀; r). But E(z₀, r₀; r) contains a part of L below time r₀, contradicting the definition of r₀.

⟹ u ≡ M at all points (y₀, s₀) connected to (x₀, t₀) by a line segment in ΩT.

General case: (x, t) ∈ ΩT with t < t₀ and Ω connected. Choose a polygonal path x₀, x₁, . . . , xm = x in Ω and times t₀ > t₁ > · · · > tm = t such that each segment from (xi, ti) to (xi+1, ti+1) lies in ΩT.

⟹ u(x, t) = u(x₀, t₀).
Theorem 3.3.5 (Uniqueness of solutions on bounded domains): If g ∈ C(ΓT ), f ∈ C(ΩT )
then there exists at most one solution to
ut − ∆u = f in ΩT
u = g on ΓT

Proof: If u₁ and u₂ are solutions, then v = ±(u₁ − u₂) solves

vt − ∆v = 0 in ΩT
v = 0 on ΓT

Now we use the maximum principle.
Theorem 3.3.6 (Maximum principle for solutions of the Cauchy problem):
ut − ∆u = 0 in Rn × (0, T)
u = g on Rn × {t = 0}

Suppose u ∈ C²₁(Rn × (0, T)) ∩ C(Rn × [0, T]) is a solution of the heat equation with
u(x, t) ≤ A e^{a|x|²} for x ∈ Rn, t ∈ [0, T], with A, a > 0 constants. Then

sup_{Rn×[0,T]} u = sup_{Rn} g.
Remark: Theorem not true without the exponential bound, [John].
Proof: First suppose that 4aT < 1. Then 4a(T + ε) < 1 for ε > 0 small enough, i.e. a < 1/4(T + ε), and there exists γ > 0 such that

a + γ = 1/4(T + ε).

Fix y ∈ Rn, µ > 0 and define

v(x, t) = u(x, t) − (µ/(T + ε − t)^{n/2}) e^{|x−y|²/4(T+ε−t)},   x ∈ Rn, t > 0.

Check: v is a solution of the heat equation, by direct computation (the subtracted term has the same form as Φ with t replaced by −(T + ε − t)).

Let r > 0, Ω = B(y, r), ΩT = B(y, r) × (0, T]. Use the maximum principle for v on the bounded set ΩT:

max_{Ω̄T} v ≤ max_{ΓT} v.

On the bottom:

v(x, 0) = u(x, 0) − (µ/(T + ε)^{n/2}) e^{|x−y|²/4(T+ε)}   [the subtracted term is ≥ 0]
        ≤ u(x, 0) ≤ sup_{Rn} g.

On the lateral boundary, |x − y| = r, so |x| ≤ |y| + r:

v(x, t) = u(x, t) − (µ/(T + ε − t)^{n/2}) e^{r²/4(T+ε−t)}
        ≤ A e^{a|x|²} − (µ/(T + ε)^{n/2}) e^{r²/4(T+ε)}
        ≤ A e^{a(|y|+r)²} − µ (4(a + γ))^{n/2} e^{(a+γ)r²}   [using 1/(T + ε) = 4(a + γ)]
        ≤ sup_{Rn} g, if we choose r big enough:

the leading order term e^{(a+γ)r²} dominates e^{ar²} for r large enough. Hence, for r big enough,

max_{Ω̄T} v ≤ sup_{Rn} g  ⟹  sup_{Rn×[0,T]} v ≤ sup_{Rn} g.

Now let µ → 0 and get

sup_{Rn×[0,T]} u ≤ sup_{Rn} g.

If 4aT ≥ 1, choose t₀ > 0 such that 4at₀ < 1 and apply the above argument on [0, t₀], [t₀, 2t₀], . . .
Theorem 3.3.7 (Uniqueness of solutions for the Cauchy problem): There exists at most
one solution u ∈ C²₁(Rn × (0, T]) ∩ C(Rn × [0, T]) of

ut − ∆u = f in Rn × (0, T]
u = g on Rn × {t = 0}

which satisfies

|u(x, t)| ≤ A e^{a|x|²},   x ∈ Rn, t ∈ [0, T].

Proof: Suppose u₁ and u₂ are solutions. Then w = ±(u₁ − u₂) solves

wt − ∆w = 0 in Rn × (0, T]
w = 0 on Rn × {t = 0}

with |w(x, t)| ≤ 2A e^{a|x|²}

⟹ the maximum principle applies (to both signs)
⟹ w = 0
⟹ uniqueness.
3.4 Regularity of Solutions
Theorem 3.4.1 (): Let Ω be open and let u ∈ C²₁(ΩT) be a solution of the heat equation. Then u ∈ C∞(ΩT).
Proof: Regularity is a local issue. Work with parabolic cylinders

C(x, t; r) = {(y, s) ∈ Rn × R : |x − y| ≤ r, t − r² ≤ s ≤ t}.

Fix (x₀, t₀) ∈ ΩT and r small enough that

C := C(x₀, t₀; r) ⊂ ΩT,   C′ := C(x₀, t₀; 3r/4),   C″ := C(x₀, t₀; r/2).

Let v(x, t) = u(x, t) ξ(x, t), where ξ ∈ C∞(ΩT) is a cut-off function:

0 ≤ ξ ≤ 1 on ΩT,   ξ ≡ 1 on C′,   ξ ≡ 0 outside C (i.e. on ΩT ∖ C).

Goal: a representation

u(x, t) = ∬ K(x, y, t, s) u(y, s) dy ds

with a smooth kernel K, which implies that u is as good as K. Compute:

vt = ut ξ + u ξt,   Dv = ξ Du + u Dξ,   ∆v = ξ ∆u + 2 Du·Dξ + u ∆ξ,

so, using the heat equation for u,

vt − ∆v = u ξt − 2 Du·Dξ − u ∆ξ =: f̃.

Since v = 0 at t = 0, a solution v of this equation is given by Duhamel's principle, and by uniqueness for the Cauchy problem,

v(x, t) = ∫_0^t ∫_{Rn} Φ(x − y, t − s) f̃(y, s) dy ds.

Suppose for the moment being that u is smooth. For (x, t) ∈ C″ we have ξ(x, t) = 1, so v(x, t) = u(x, t); moreover f̃ is supported in C ∖ C′ (where the derivatives of ξ live). Hence

u(x, t) = ∬_{C∖C′} Φ(x − y, t − s) [u ξs − 2 Du·Dξ − u ∆ξ](y, s) dy ds
        = ∬_{C∖C′} Φ(x − y, t − s) [u ξs − u ∆ξ + 2u ∆ξ] dy ds + ∬_{C∖C′} DΦ(x − y, t − s) · 2u Dξ dy ds
        = ∬_{C∖C′} K(x, y, t, s) u(y, s) dy ds

(integrating by parts in y to move the derivative off u), with

K(x, y, t, s) = [ξs + ∆ξ](y, s) Φ(x − y, t − s) + 2 Dξ(y, s) · DΦ(x − y, t − s).

For (x, t) ∈ C″ and (y, s) ∈ C ∖ C′ we don't see the singularity of Φ, so K is C∞

⟹ representation of u
⟹ u is as good as K
⟹ u is C∞.
3.4.1 Application of the Representation to Local Estimates
Theorem 3.4.2 (local bounds on derivatives): Let u ∈ C²₁(ΩT) be a solution of the heat equation. Then

max_{C(x,t;r/2)} |Dx^k Dt^l u| ≤ (C_{kl}/r^{k+2l+n+2}) ∬_{C(x,t;r)} |u(y, s)| dy ds.

Proof: Scaling argument; do the calculation on a cylinder with r = 1. Set

v(x, t) = u(x₀ + rx, t₀ + r²t),

which maps C(0, 0; 1) onto C(x₀, t₀; r). v solves the heat equation, since

vt(x, t) = r² ut(x₀ + rx, t₀ + r²t),   ∆v(x, t) = r² ∆u(x₀ + rx, t₀ + r²t).

Representation of v (Theorem 3.4.1): for (x, t) ∈ C(0, 0; 1/2),

v(x, t) = ∬_{C(0,0;1)} K(x, y, t, s) v(y, s) dy ds,

so

|Dx^k Dt^l v(x, t)| ≤ ∬_{C(0,0;1)} |Dx^k Dt^l K(x, y, t, s)| |v(y, s)| dy ds ≤ C_{kl} ∬_{C(0,0;1)} |v(y, s)| dy ds

⟹ max_{C(0,0;1/2)} |Dx^k Dt^l v| ≤ C_{kl} ∬_{C(0,0;1)} |v(y, s)| dy ds.

Rescaling (each x-derivative contributes a factor r, each t-derivative a factor r², and the change of variables contributes r^{−(n+2)}):

r^{k+2l} max_{C(x₀,t₀;r/2)} |Dx^k Dt^l u| ≤ (C_{kl}/r^{n+2}) ∬_{C(x₀,t₀;r)} |u(y, s)| dy ds

⟹ max_{C(x₀,t₀;r/2)} |Dx^k Dt^l u| ≤ (C_{kl}/r^{k+2l+n+2}) ∬_{C(x₀,t₀;r)} |u(y, s)| dy ds.
Remark: This reflects again the different scaling in x and t.
3.5 Energy Methods
Theorem 3.5.1 (Uniqueness of solutions): Let Ω be open and bounded. There exists at most one solution u ∈ C²₁(Ω̄T) of

ut − ∆u = f in ΩT
u = g on Ω × {t = 0}
u = h on ∂Ω × (0, T]

Proof: Suppose you have two solutions u₁, u₂. Let w = u₁ − u₂; w solves the homogeneous heat equation with zero initial and boundary data. Let

e(t) = ∫_Ω w²(x, t) dx,   0 ≤ t ≤ T.

Then

(d/dt) e(t) = ∫_Ω 2 w(x, t) wt(x, t) dx = 2 ∫_Ω w(x, t) ∆w(x, t) dx
(I.P.) = −2 ∫_Ω Dw(x, t) · Dw(x, t) dx + 0   [the boundary term vanishes since w = 0 on ∂Ω]
= −2 ∫_Ω |Dw(x, t)|² dx ≤ 0

⟹ e(t) is decreasing in t. Since

e(0) = ∫_Ω w²(x, 0) dx = 0,

we get e(t) ≡ 0 on [0, T]

⟹ w(x, t) = 0 for all x ∈ Ω, t ∈ [0, T]
⟹ u₁ = u₂ in ΩT.
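The energy argument can be watched numerically. A minimal explicit finite-difference sketch (not part of the notes; grid sizes and initial data are ad hoc choices, with dt inside the stability bound dt ≤ dx²/2): the discrete energy e(t) = Σ w² dx decreases monotonically, mirroring the proof.

```python
# Explicit finite differences for w_t = w_xx on (0, 1) with w(0) = w(1) = 0.
# Grid and data are ad hoc; dt = 0.4*dx^2 respects the stability bound.
m = 100
dx = 1.0 / m
dt = 0.4 * dx * dx
w = [0.0] + [1.0 if 0.3 < j * dx < 0.6 else 0.0 for j in range(1, m)] + [0.0]

energies = []
for step in range(500):
    energies.append(sum(v * v for v in w) * dx)
    w = [0.0] + [w[j] + dt / dx ** 2 * (w[j + 1] - 2 * w[j] + w[j - 1])
                 for j in range(1, m)] + [0.0]

# e(t) = int w^2 dx is non-increasing, as in the proof above
assert all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
print("energy decreased from", round(energies[0], 4), "to", round(energies[-1], 4))
```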
3.6 Backward Heat Equation
Heat equation:

ut − ∆u = 0 in ΩT
u(x, 0) = g(x) in Ω

Backwards heat equation:

ut − ∆u = 0 in ΩT
u(x, T) = g(x) in Ω

This reverses the smoothening: expect solutions to break down in finite time even if g is smooth. Transform t 7→ T − t:

v(x, t) = u(x, T − t),   vt(x, t) = −ut(x, T − t),   ∆v(x, t) = ∆u(x, T − t).

So ut − ∆u = 0 transforms to

vt + ∆v = 0,   the backwards heat equation.

Theorem 3.6.1 (): If u₁ and u₂ are solutions of the heat equation with u₁(x, T) = u₂(x, T) and with the same boundary values, then the solutions are identical.
3.7 Exercises
3.1: Apply the similarity method to the linear transport equation

ut + aux = 0

to obtain the special solutions of the form u(x, t) = c(at − x)^{−α}.
3.2: [Evans 2.5 #10] Suppose that u is smooth and that u solves ut−∆u = 0 in Rn×(0,∞).
a) Show that uλ(x, t) = u(λx, λ²t) also solves the heat equation for each λ ∈ R.
b) Use a) to show that v(x, t) = x · Du(x, t) + 2tut(x, t) solves the heat equation as
well. The point of this problem is to use part a).
3.3: [Evans 2.5 #11] Assume that n = 1 and that u has the special form u(x, t) = v(x²/t).

a) Show that

ut = uxx

if and only if

4z v″(z) + (2 + z) v′(z) = 0 for z > 0.

b) Show that the general solution of the ODE in a) is given by

v(z) = c ∫_0^z e^{−s/4} s^{−1/2} ds + d,   with c, d ∈ R.
c) Differentiate the solution v(x²/t) with respect to x and select the constant c properly, so as to obtain the fundamental solution Φ for n = 1.
3.4: [Evans 2.5 #12] Write down an explicit formula for a solution of the initial value problem
ut − ∆u + cu = f in Rn × (0,∞),
u = g on Rn × {t = 0},

where c ∈ R.
Hint: Make the change of variables u(x, t) = e^{at} v(x, t) with a suitable constant a.
3.5: [Evans 2.5 #13] Let g : [0,∞) → R be a smooth function with g(0) = 0. Show
that a solution of the initial-boundary value problem
ut − uxx = 0 in R+ × (0,∞),
u = 0 on R+ × {t = 0},
u = g on {x = 0} × [0,∞),

is given by

u(x, t) = (x/√(4π)) ∫_0^t (1/(t − s)^{3/2}) e^{−x²/4(t−s)} g(s) ds.
Hint: Let v(x, t) = u(x, t)− g(t) and extend v to x < 0 by odd reflection.
4 The Wave Equation
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
4.1 D’Alembert’s Representation of Solution in 1D 88
4.2 Representations of Solutions for n = 2 and n = 3 93
4.3 Solutions of the Non-Homogeneous Wave Equation 98
4.4 Energy methods 101
4.5 Exercises 103
4.1 D’Alembert’s Representation of Solution in 1D
utt − ∆u = 0 in ΩT   (homogeneous)
utt − ∆u = f in ΩT   (non-homogeneous)
u = g on Ω × {t = 0}
ut = h on Ω × {t = 0}

In one dimension:

utt − uxx = 0  ⟺  (∂t² − ∂x²) u = 0  ⟺  (∂t + ∂x)(∂t − ∂x) u = 0;

set v := (∂t − ∂x) u.
Solve (∂t + ∂x)v = vt + vx = 0: a transport equation for v, so v(x, t) = a(x − t) for an arbitrary function a.

Thus ∂t u − ∂x u = v(x, t) = a(x − t).

General solution formula for

ut + b ux = f(x, t) in Rn × (0,∞)
u = g on Rn × {t = 0}

Solution (Duhamel's principle):

u(x, t) = g(x − tb) + ∫_0^t f(x + (s − t)b, s) ds.

Here b = −1 and f(x, t) = a(x − t):

u(x, t) = g(x + t) + ∫_0^t a(x + (s − t)(−1) − s) ds = g(x + t) + ∫_0^t a(x + t − 2s) ds.

Substituting y = x + t − 2s:

u(x, t) = g(x + t) − (1/2) ∫_{x+t}^{x−t} a(y) dy = g(x + t) + (1/2) ∫_{x−t}^{x+t} a(y) dy.

Now

a(x) = v(x, 0) = ∂t u(x, 0) − ∂x u(x, 0) = h(x) − g′(x),

so

u(x, t) = g(x + t) + (1/2) ∫_{x−t}^{x+t} (h(y) − g′(y)) dy

⟹ u(x, t) = (1/2)(g(x + t) + g(x − t)) + (1/2) ∫_{x−t}^{x+t} h(y) dy,

D'Alembert's formula.
Theorem 4.1.1 (D'Alembert's solution for the wave equation in 1D): Let g ∈ C²(R), h ∈ C¹(R) and

u(x, t) = (1/2)(g(x + t) + g(x − t)) + (1/2) ∫_{x−t}^{x+t} h(y) dy.

Then

i.) u ∈ C²(R × (0,∞));

ii.) utt − uxx = 0, u(x, 0) = g(x), ut(x, 0) = h(x).
[Figure: domain of dependence. Information at x₀ travels with finite speed along the characteristics x₀ ± t; cf. the heat equation, which has infinite speed of propagation. The value u(x, t) is determined by the data on [x − t, x + t].]

Remark: We say u(x, 0) = g(x) if lim_{(x,t)→(x₀,0), t>0} u(x, t) = g(x₀) for all x₀ ∈ R.

Remarks:

• finite speed of propagation;

• domain of dependence: the values of u(x, t) depend only on the values of g and h on [x − t, x + t].
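Theorem 4.1.1 can be verified numerically for particular data. In the sketch below (not part of the notes), g and h are arbitrary smooth choices with a known antiderivative H of h; the PDE and the initial conditions are checked by finite differences:

```python
import math

# Arbitrary smooth test data; H is an antiderivative of h.
g = lambda x: math.exp(-x * x)
h = lambda x: x * math.exp(-x * x)
H = lambda x: -0.5 * math.exp(-x * x)

def u(x, t):
    # D'Alembert: u = (g(x+t) + g(x-t))/2 + (1/2) * integral of h over [x-t, x+t]
    return 0.5 * (g(x + t) + g(x - t)) + 0.5 * (H(x + t) - H(x - t))

eps = 1e-4
for x, t in [(0.3, 0.7), (-1.2, 2.0)]:
    utt = (u(x, t + eps) - 2 * u(x, t) + u(x, t - eps)) / eps ** 2
    uxx = (u(x + eps, t) - 2 * u(x, t) + u(x - eps, t)) / eps ** 2
    assert abs(utt - uxx) < 1e-5                       # u_tt - u_xx = 0
    assert abs(u(x, 0.0) - g(x)) < 1e-12               # u(x, 0) = g(x)
    ut0 = (u(x, eps) - u(x, -eps)) / (2 * eps)
    assert abs(ut0 - h(x)) < 1e-6                      # u_t(x, 0) = h(x)
print("D'Alembert checks passed")
```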
4.1.1 Reflection Argument

Want to solve the problem on the half line (data g, h on R+; boundary condition at x = 0):

utt − uxx = 0 in R+ × (0,∞)
u = g on R+ × {t = 0}
ut = h on R+ × {t = 0}
u = 0 on {x = 0} × (0,∞)
and suppose that g(0) = 0, h(0) = 0. Try to find a solution ū on R of a wave equation that is odd as a function of x, i.e.

ū(−x, t) = −ū(x, t)  ⟹  ū(0, t) = −ū(0, t)  ⟹  ū(0, t) = 0.

Extend g and h by odd reflection:

ḡ(x) = g(x) for x ≥ 0,   ḡ(x) = −g(−x) for x < 0;
h̄(x) = h(x) for x ≥ 0,   h̄(x) = −h(−x) for x < 0.

Let ū be the solution of

ūtt − ūxx = 0 in R × (0,∞)
ū = ḡ on R × {t = 0}
ūt = h̄ on R × {t = 0}

D'Alembert's formula:

ū(x, t) = (1/2)(ḡ(x − t) + ḡ(x + t)) + (1/2) ∫_{x−t}^{x+t} h̄(y) dy.
Assertion: ū is odd, and its restriction to [0,∞) is our solution u.

Proof:

ū(−x, t) = (1/2)[ḡ(−x − t) + ḡ(−x + t)] + (1/2) ∫_{−x−t}^{−x+t} h̄(y) dy
         = −(1/2)[ḡ(x + t) + ḡ(x − t)] + (1/2) ∫_{−x−t}^{−x+t} h̄(y) dy
(sub. z = −y)  = −(1/2)[ḡ(x + t) + ḡ(x − t)] + (1/2) ∫_{x−t}^{x+t} h̄(−z) dz
         = −(1/2)[ḡ(x + t) + ḡ(x − t)] − (1/2) ∫_{x−t}^{x+t} h̄(y) dy

⟹ ū(−x, t) = −ū(x, t)
⟹ ū is odd.

Restricting to x ≥ 0:

u(x, t) = (1/2)[g(x − t) + g(x + t)] + (1/2) ∫_{x−t}^{x+t} h(y) dy,   x ≥ t ≥ 0,
u(x, t) = (1/2)[g(x + t) − g(t − x)] + (1/2) ∫_{t−x}^{t+x} h(y) dy,   t ≥ x ≥ 0,

where in the second case we used

∫_{x−t}^{x+t} h̄(y) dy = ∫_{x−t}^0 h̄(y) dy + ∫_0^{x+t} h(y) dy
                      = ∫_{x−t}^0 (−h(−y)) dy + ∫_0^{x+t} h(y) dy
(sub. z = −y)         = −∫_0^{t−x} h(z) dz + ∫_0^{x+t} h(y) dy
                      = ∫_{t−x}^{t+x} h(y) dy.
4.1.2 Geometric Interpretation (h ≡ 0)

[Figure: for h ≡ 0 the initial profile g splits into two half-amplitude copies travelling left and right.]

Remark: D'Alembert:

u(x, t) = (1/2)[g(x + t) + g(x − t)] + (1/2) ∫_{x−t}^{x+t} h(y) dy.

The solution is generally of the form u(x, t) = F(x − t) + G(x + t) with arbitrary functions F and G.
4.2 Representations of Solutions for n = 2 and n = 3
4.2.1 Spherical Averages and the Euler-Poisson-Darboux Equation
U(x; t, r) = ⨍_{∂B(x,r)} u(y, t) dS(y)
G(x; r) = ⨍_{∂B(x,r)} g(y) dS(y)
H(x; r) = ⨍_{∂B(x,r)} h(y) dS(y)

Recover u(x, t) = lim_{r→0} U(x; t, r). Fix x ∈ Rn from now on.

Lemma 4.2.1 (): Let u be a solution of the wave equation. Then U satisfies

Utt − Urr − ((n − 1)/r) Ur = 0 in R+ × (0,∞)
U = G, Ut = H on R+ × {t = 0}   (4.1)

Once this is established, we transform (4.1) into Ũtt − Ũrr = 0 and use the representation on the half line to find Ũ, U, and u.
Calculation as in the MVP for harmonic functions (recall |∂B(x, r)| = nα(n)r^{n−1}):

Ur(x; t, r) = (r/n) ⨍_{B(x,r)} ∆u(y, t) dy
            = (r/n) (1/(α(n)rⁿ)) ∫_{B(x,r)} ∆u(y, t) dy
            = (1/(nα(n))) (1/r^{n−1}) ∫_{B(x,r)} ∆u(y, t) dy

∴ r^{n−1} Ur = (1/(nα(n))) ∫_{B(x,r)} ∆u(y, t) dy
(since u solves the wave eq.)  = (1/(nα(n))) ∫_{B(x,r)} utt(y, t) dy
(in polar)  = (1/(nα(n))) ∫_0^r ∫_{∂B(x,ρ)} utt dS dρ

∴ [r^{n−1} Ur]_r = (1/(nα(n))) ∫_{∂B(x,r)} utt dS = r^{n−1} ⨍_{∂B(x,r)} utt dS = r^{n−1} Utt

∴ Utt − (1/r^{n−1}) [r^{n−1} Ur]_r = 0
⟺ Utt − (1/r^{n−1}) [(n − 1) r^{n−2} Ur + r^{n−1} Urr] = 0
⟺ Utt − ((n − 1)/r) Ur − Urr = 0,

the Euler-Poisson-Darboux equation.
4.2.2 Kirchhoff's Formula in 3D

Recall: representations for

utt = ∆u in Rn × (0,∞)
u = g, ut = h on Rn × {t = 0}

For n = 3 we derive Kirchhoff's formula. With U(x; t, r) = ⨍_{∂B(x,r)} u(y, t) dS(y) and G(x; r), H(x; r) as above, U satisfies the Euler-Poisson-Darboux equation

Utt − Urr − ((n − 1)/r) Ur = 0,   U = G, Ut = H.

Transformation: Ũ = rU, G̃ = rG, H̃ = rH. Then (n = 3)

Ũtt = rUtt = r[Urr + (2/r) Ur] = rUrr + 2Ur = [U + rUr]_r = [rU]_rr = Ũrr

⟹

Ũtt − Ũrr = 0 in R+ × (0,∞)
Ũ = G̃, Ũt = H̃ at t = 0
Ũ(0, t) = 0 for t > 0:

the 1D wave equation on the half line with a Dirichlet condition at r = 0. From D'Alembert's solution with reflection (formula with 0 ≤ r ≤ t):

u(x, t) = lim_{r→0} Ũ(x; t, r)/r
        = lim_{r→0} { (1/2r)[G̃(t + r) − G̃(t − r)] + (1/2r) ∫_{t−r}^{t+r} H̃(y) dy }
        = G̃′(t) + H̃(t).
By definition,

G̃(x; t) = t G(x; t) = t ⨍_{∂B(x,t)} g(y) dS(y) = t ⨍_{∂B(0,1)} g(x + tz) dS(z).

Hence

G̃′(x; t) = ⨍_{∂B(0,1)} g(x + tz) dS(z) + t ⨍_{∂B(0,1)} Dg(x + tz) · z dS(z)
          = ⨍_{∂B(x,t)} [ g(y) + t (∂g/∂ν)(y) ] dS(y),

and

H̃(t) = t H(t) = t ⨍_{∂B(x,t)} h(y) dS(y).

Therefore

u(x, t) = ⨍_{∂B(x,t)} [ g(y) + t (∂g/∂ν)(y) + t h(y) ] dS(y).

This is Kirchhoff's Formula.

Loss of regularity: need g ∈ C³, h ∈ C², to get u in C².

Finite speed of propagation: u(x, t) depends only on the data on the sphere ∂B(x, t).
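Kirchhoff's formula can be spot-checked by Monte Carlo averaging over the sphere. In the sketch below (not part of the notes), the data g(y) = |y|², h ≡ 0 is an arbitrary choice for which u(x, t) = |x|² + 3t² solves the problem (check: utt = 6 = ∆u, u(x, 0) = |x|², ut(x, 0) = 0):

```python
import math, random

random.seed(0)

def sphere_point():
    # uniform point on the unit sphere in R^3 (normalized Gaussian vector)
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in v))
        if r > 1e-9:
            return [c / r for c in v]

def kirchhoff(x, t, n=200000):
    # average over the sphere dB(x, t) of g(y) + t*dg/dnu(y), with
    # g(y) = |y|^2 and h = 0, so dg/dnu = 2*y.(y - x)/t
    acc = 0.0
    for _ in range(n):
        z = sphere_point()
        y = [x[i] + t * z[i] for i in range(3)]
        gval = sum(c * c for c in y)
        dg_dnu = 2 * sum(y[i] * (y[i] - x[i]) for i in range(3)) / t
        acc += gval + t * dg_dnu
    return acc / n

x, t = [1.0, 0.0, 0.0], 0.5
exact = sum(c * c for c in x) + 3 * t * t   # u = |x|^2 + 3t^2
assert abs(kirchhoff(x, t) - exact) < 0.05
print("Kirchhoff Monte Carlo check passed")
```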
4.2.3 Poisson’s Formula in 2D
Method of descent: use the 3D solution to get the 2D solution by extending functions on R2 to R3, constant in x₃.

Let u solve

utt − ∆u = 0 in R2 × (0,∞)
u = g, ut = h on R2 × {t = 0}

Define

ū(x₁, x₂, x₃, t) = u(x₁, x₂, t),   ḡ(x₁, x₂, x₃) = g(x₁, x₂),   h̄(x₁, x₂, x₃) = h(x₁, x₂).

Then

ūtt − ∆ū = 0 in R3 × (0,∞)
ū = ḡ, ūt = h̄ on R3 × {t = 0}

Kirchhoff's formula, with x̄ = (x₁, x₂, 0):

ū(x̄, t) = ⨍_{∂B(x̄,t)} [ ḡ(y) + t (∂ḡ/∂ν)(y) + t h̄(y) ] dS(y),

and u(x₁, x₂, t) = ū(x₁, x₂, 0, t).

[Figure: the sphere ∂B(x̄, t) ⊂ R3 sits over the disk B(x, t) ⊂ R2, with upper and lower hemispheres S+ and S−.]

We need to evaluate ∫_{∂B(x̄,t)}. The sphere is given by |y − x̄|² = t², i.e.

(y₁ − x₁)² + (y₂ − x₂)² + y₃² = t².

Parametrize the upper hemisphere S+ as a graph over the disk: for y = (y₁, y₂) ∈ B(x, t) ⊂ R2,

y₃ = f(y₁, y₂) = √(t² − |y − x|²),   dS = √(1 + |Df|²) dy.

Since

∂f/∂yi = (1/2) (1/√(t² − |y − x|²)) (−2)(yi − xi) = −(yi − xi)/√(t² − |y − x|²),

we get

dS = √(1 + |y − x|²/(t² − |y − x|²)) dy = (t²/(t² − |y − x|²))^{1/2} dy = (t/√(t² − |y − x|²)) dy.
4.2.4 Representation Formulas in 2D/3D
• Kirchhoff's formula (3D):

u(x, t) = ⨍_{∂B(x,t)} [ g + t (∂g/∂ν) + t h ] dS(y).

• Poisson's formula (2D). With ū, ḡ, h̄ as above and x̄ = (x₁, x₂, 0):

ū(x̄, t) = ⨍_{∂B(x̄,t)} [ ḡ + t (∂ḡ/∂ν) + t h̄ ] dS(y).

Parametrize the surface integral over the upper hemisphere S+ by f(y₁, y₂) = √(t² − |x − y|²), so that

dS = √(1 + |Df|²) dy = (t/√(t² − |y − x|²)) dy.

On the sphere,

∂ḡ/∂ν = Dḡ(y) · (y − x̄)/t = Dg(y₁, y₂) · ((y₁, y₂) − x)/t,

since ḡ does not depend on y₃. Hence

∫_{S+} [ ḡ + t (∂ḡ/∂ν) + t h̄ ](y) dS(y)
= ∫_{B(x,t)} [ g(y₁, y₂) + Dg(y₁, y₂) · (y − x) + t h(y₁, y₂) ] (t dy/√(t² − |y − x|²)).

Since ḡ, h̄ do not depend on y₃, the lower hemisphere S− contributes the same amount, and |∂B(x̄, t)| = 4πt². Therefore

u(x, t) = ū(x̄, t) = (2/4πt²) ∫_{B(x,t)} [ g(y) + Dg(y) · (y − x) + t h(y) ] (t dy/√(t² − |x − y|²))

⟹ u(x, t) = (1/2) ⨍_{B(x,t)} ( t g(y) + t Dg(y) · (y − x) + t² h(y) ) / √(t² − |x − y|²) dy,   x ∈ R2, t > 0:

Poisson's formula in 2D.

Remarks

• Method of descent: the same idea works in n dimensions, descending from odd to even dimensions.

• Qualitative difference: in 2D the integration is over the full ball B(x, t), not just its boundary.

• Loss of regularity: need g ∈ C³, h ∈ C² to have u ∈ C².
4.3 Solutions of the Non-Homogeneous Wave Equation
utt − ∆u = f(x, t) in Rn × (0,∞)
u = 0, ut = 0 on Rn × {t = 0}

The fully non-homogeneous wave equation

utt − ∆u = f,   u = g,   ut = h

can be solved by u = v + w, where

vtt − ∆v = f, v = 0, vt = 0   and   wtt − ∆w = 0, w = g, wt = h.
4.3.1 Duhamel’s Principle
Let u(x, t; s) solve

utt(·; s) − ∆u(·; s) = 0 on Rn × (s, ∞)
u(·; s) = 0 on Rn × {t = s}
ut(·; s) = f(·, s) on Rn × {t = s}

Then

u(x, t) = ∫_0^t u(x, t; s) ds,   x ∈ Rn, t > 0.

Proof: We differentiate:

ut(x, t) = u(x, t; t) + ∫_0^t ut(x, t; s) ds = ∫_0^t ut(x, t; s) ds   [since u(x, t; t) = 0]

utt(x, t) = ut(x, t; t) + ∫_0^t utt(x, t; s) ds = f(x, t) + ∫_0^t utt(x, t; s) ds

∆u(x, t) = ∫_0^t ∆u(x, t; s) ds = ∫_0^t utt(x, t; s) ds

⟹ utt(x, t) − ∆u(x, t) = f(x, t).
Example in 1D: D'Alembert's formula,

u(x, t) = (1/2)[g(x + t) + g(x − t)] + (1/2) ∫_{x−t}^{x+t} h(y) dy.

To get the solution u(x, t; s), translate the time origin from 0 to s:

u(x, t; s) = (1/2)[g(x + (t − s)) + g(x − (t − s))] + (1/2) ∫_{x−(t−s)}^{x+(t−s)} h(y) dy.

Recall that for u(x, t; s) we have g ≡ 0 and h(y) = f(y, s). Duhamel's principle:

u(x, t) = ∫_0^t (1/2) ∫_{x−(t−s)}^{x+(t−s)} f(y, s) dy ds
(sub. τ = t − s)  = (1/2) ∫_0^t ∫_{x−τ}^{x+τ} f(y, t − τ) dy dτ.
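As a sanity check of this formula, take f ≡ 1 (an arbitrary choice for which the answer is computable by hand): u(x, t) = (1/2) ∫₀ᵗ 2s ds = t²/2, which indeed satisfies utt − uxx = 1 with zero initial data. A numerical sketch (not part of the notes; the grid is ad hoc):

```python
def duhamel_wave(f, x, t, n=400):
    # u(x, t) = (1/2) int_0^t int_{x-s}^{x+s} f(y, t-s) dy ds, midpoint rule
    hs = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * hs
        hy = 2 * s / n
        inner = sum(f(x - s + (j + 0.5) * hy, t - s) for j in range(n)) * hy
        total += 0.5 * inner * hs
    return total

x, t = 0.3, 1.2
approx = duhamel_wave(lambda y, s: 1.0, x, t)
assert abs(approx - t * t / 2) < 1e-6   # exact solution for f = 1 is u = t^2/2
print("non-homogeneous wave check passed")
```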
[Figure: the backward cone from (x, t) with base [x − t, x + t] on {s = 0} — these are the values of f which influence u(x, t).]
Kirchhoff's formula (3D):

u(x, t) = ⨍_{∂B(x,t)} ( g + t (∂g/∂ν) + t h ) dS(y).

Poisson's formula in 2D:

u(x, t) = (1/2) ⨍_{B(x,t)} ( t g(y) + t Dg(y) · (y − x) + t² h(y) ) / √(t² − |y − x|²) dy.

Non-homogeneous problems:

utt − ∆u = f,   u = 0,   ut = 0;
u(x, t) = ∫_0^t u(x, t; s) ds,

where u(·; s) is the solution on Rn × (s, ∞) with u(·; s) = 0, ut(·; s) = f(·, s) at t = s.
4.3.2 Solution in 3D
Initial data: g ≡ 0, h(y) = f(y, s). Translate the time origin from t = 0 to t = s:

u(x, t; s) = (t − s) ⨍_{∂B(x,t−s)} f(y, s) dS(y)

u(x, t) = ∫_0^t (t − s) ⨍_{∂B(x,t−s)} f(y, s) dS(y) ds
(r = t − s)  = ∫_0^t r ⨍_{∂B(x,r)} f(y, t − r) dS(y) dr
             = ∫_0^t (r/4πr²) ∫_{∂B(x,r)} f(y, t − r) dS(y) dr
             = ∫_{B(x,t)} (1/4π|x − y|) f(y, t − |x − y|) dy,   x ∈ R3, t > 0,

a retarded potential.
4.4 Energy methods
Ω ⊂ Rn bounded and open, ΩT = Ω× (0, T )
Theorem 4.4.1 (uniqueness): There is at most one smooth solution of

utt − ∆u = 0 in ΩT
u = g, ut = h in Ω × {t = 0}
u = k on ∂Ω × (0, T]

Proof: Suppose u₁ and u₂ are solutions. Then w = u₁ − u₂ satisfies

wtt − ∆w = 0 in ΩT
w = 0, wt = 0 in Ω × {t = 0}
w = 0 on ∂Ω × (0, T]

Multiply by wt and integrate over Ω:

0 = ∫_Ω ( wtt(x, t) wt(x, t) − ∆w(x, t) wt(x, t) ) dx
  = ∫_Ω ( wtt(x, t) wt(x, t) + Dw(x, t) · Dwt(x, t) ) dx − ∫_{∂Ω} (Dw · ν) wt dS
  = ∫_Ω ( (1/2) ∂t|wt|² + (1/2) ∂t|Dw|² ) dx,

where the boundary term vanishes because w = 0, hence wt = 0, on ∂Ω × (0, T]. Integrate from t = 0 to t = T:

0 = ∫_0^T ∂t ∫_Ω ( (1/2)|wt|² + (1/2)|Dw|² ) dx dt
  = (1/2) ∫_Ω ( |wt(x, T)|² + |Dw(x, T)|² ) dx − (1/2) ∫_Ω ( |wt(x, 0)|² + |Dw(x, 0)|² ) dx,

and the second integral vanishes by the initial conditions. Since T was arbitrary,

⟹ wt = 0 and Dw = 0 in ΩT
⟹ w = constant ⟹ w = 0
⟹ u₁ = u₂ in ΩT.

Alternative: Define

e(t) = (1/2) ∫_Ω ( |wt(x, t)|² + |Dw(x, t)|² ) dx

and show that e(t) = 0 for t ≥ 0.
4.4.1 Finite Speed of Propagation

Fix (x0, t0) with t0 > 0 and B(x0, t0) ⊂ Rn, and define the backwards cone

C = { (x, t) : 0 ≤ t ≤ t0, |x − x0| ≤ t0 − t }.

If u is a solution of the wave equation with u ≡ 0, ut ≡ 0 on B(x0, t0) at t = 0, then u ≡ 0 in C and in particular u(x0, t0) = 0.
Define: e(t) = ∫_{B(x0,t0−t)} ( ut²(x, t) + |Du(x, t)|² ) dx

Compute:

e′(t) = ∫_{B(x0,t0−t)} ( 2 ut utt + 2 Du · Dut ) dx − ∫_{∂B(x0,t0−t)} ( ut² + |Du|² ) dS
 (I.P.) = ∫_{B(x0,t0−t)} ( 2 ut utt − 2 ∆u · ut ) dx + ∫_{∂B(x0,t0−t)} 2 (Du · ν) ut dS − ∫_{∂B(x0,t0−t)} ( ut² + |Du|² ) dS
 = 0 + ∫_{∂B(x0,t0−t)} ( 2 (Du · ν) ut − ut² − |Du|² ) dS
Gregory T. von Nessi
Versio
n::3
11
02
01
1
4.5 Exercises103
By Cauchy-Schwarz, |Du · ν| ≤ |Du| |ν| = |Du|, and by Young's inequality

|2 (Du · ν) ut| ≤ 2 |Du| |ut| ≤ |Du|² + |ut|².

Young's inequality: 2ab ≤ a² + b², a, b ∈ R. It comes from the fact 0 ≤ a² − 2ab + b² = (a − b)².

∴ e′(t) ≤ ∫_{∂B(x0,t0−t)} ( |Du|² + |ut|² − ut² − |Du|² ) dS ≤ 0
=⇒ e(t) is decreasing in time. But

e(0) = ∫_{B(x0,t0)} ( |ut|² + |Du|² ) dx = 0

=⇒ e(t) = 0 for t ≥ 0
=⇒ ut(x, t) = 0 and Du(x, t) = 0 for (x, t) ∈ C =⇒ u ≡ 0 in C
=⇒ u(x0, t0) = 0. □
4.5 Exercises
4.1: [Evans 2.6 #16] Suppose that E = (E1, E2, E3) and B = (B1, B2, B3) are a solution
of Maxwell’s equations
Et = curlB, Bt = − curlE, divE = 0, divB = 0.
Show that

utt − ∆u = 0,

where u = E^i or u = B^i, i = 1, 2, 3.
4.2: [Evans 2.6 #17] (Equipartition of energy) Suppose that u ∈ C²(R × [0,∞)) is a solution of the initial value problem for the wave equation in one dimension,

utt − uxx = 0 in R × (0, ∞),
u = g on R × {t = 0}, ut = h on R × {t = 0}.
Suppose that g and h have compact support. The kinetic energy is given by
k(t) =
∫ ∞
−∞u2t (x, t) dx
and the potential energy by
p(t) =
∫ ∞
−∞u2x (x, t) dx.
Prove that
a) the total energy, e(t) = k(t) + p(t), is constant in t,
b) k(t) = p(t) for all large enough times t.
4.3: [Evans 2.6 #18] Let u be a solution of

utt − ∆u = 0 in R³ × (0, ∞),
u = g on R³ × {t = 0}, ut = h on R³ × {t = 0},

where g and h are smooth and have compact support. Show that there exists a constant C such that

|u(x, t)| ≤ C/t, x ∈ R³, t > 0.
4.4: [Qualifying exam 01/01] Let Ω be a bounded domain in Rn with smooth boundary and
suppose that g : Ω → R is smooth and positive. Given f1, f2 : Ω → R, show that there can
be at most one C²(Ω) solution u : Ω × (0, ∞) → R to the initial value problem

utt = g(x) ∆u in Ω × (0, ∞),
u = f1 on Ω × {t = 0}, ut = f2 on Ω × {t = 0}, u = 0 on ∂Ω × (0, ∞).

Hint: Find an energy E for which dE/dt ≤ CE and use Gronwall's inequality, which asserts the following: if y(t) ≥ 0 satisfies dy/dt ≤ φ y, where φ ≥ 0 and φ ∈ L¹(0, T), then

y(t) ≤ y(0) exp( ∫_0^t φ(s) ds ).
4.5: [Qualifying exam 08/96] Let h ∈ C(R³) with h ≥ 0 and h = 0 for |x| ≥ a > 0. Let u(x, t) be the solution of the initial-value problem

utt − ∆u = 0 on R³ × (0, ∞),
u(x, 0) = 0, ut(x, 0) = h(x) on R³.

a) Show that u(x, t) = 0 in the set { (x, t) : |x| ≤ t − a, t ≥ 0 }.
b) Suppose that there exists a point x0 such that u(x0, t) = 0 for all t ≥ 0. Show that u(x, t) ≡ 0.
4.6: [Qualifying exam 01/00] Suppose that q : R³ → R is continuous with q(x) = 0 for |x| > a > 0. Let u(x, t) be the solution of

utt − ∆u = e^{iωt} q(x) in R³ × (0, ∞),
u(x, 0) = 0, ut(x, 0) = 0 on R³.

a) Show that e^{−iωt} u(x, t) → v(x) as t → ∞ uniformly on compact sets, where

∆v + ω²v = q.

Hint: You may take the result of Exercise 2.1 for granted.

b) Show that v satisfies the "outgoing" radiation condition

| ( ∂/∂r + iω ) v(x) | ≤ C/|x|² as r = |x| → ∞.
5 Fully Non-Linear First Order PDEs
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
5.1 Introduction 106
5.2 Method of Characteristics 107
5.3 Exercises 121
5.1 Introduction
Solve F(Du(x), u(x), x) = 0 in Ω, with Ω bounded and open in Rn.

• Method of characteristics.
• Often one of the variables plays the role of time (conservation laws).
• Existence of classical solutions only for short times.
• Global existence for conservation laws via weak solutions (broaden the class of solutions to include both continuous and discontinuous solutions).

Notation: Frequently one sets p = Du, z = u, and writes the 1st order PDE as F(p, z, x) = 0, where

F : Rn × R × Ω → R

is always assumed to be smooth.
Example: Transport equation
ut(y, t) + a(y, t) · Du(y, t) = 0 (5.1)
Set x = (y, t), p = (Du, ut). Then (5.1) takes the form F(p, z, x) = 0 with

F(p, z, x) = ( a(y, t), 1 ) · ( Du, ut ) = ( a(x), 1 ) · ( Du(x), ut(x) ),

i.e.

F(p, z, x) = ( a(x), 1 ) · p.
5.2 Method of Characteristics
Idea: Reduce the PDE to a system of ODEs for x(s), z(s) = u(x(s)), p(s) = Du(x(s)), where x(s) is a curve in Ω.

Suppose u is a smooth solution of F(Du, u, x) = 0. Then

ṗi(s) = d/ds u_{xi}(x(s)) = Σ_{j=1}^n u_{xixj}(x(s)) ẋj(s).
For a 1st order PDE, u ∈ C¹ is natural, but here we get second derivatives u_{xixj}. Eliminate them! Differentiate F(Du, u, x) = 0:

∂/∂xi F(Du, u, x) = Σ_{j=1}^n ∂F/∂pj u_{xixj} + ∂F/∂z u_{xi} + ∂F/∂xi = 0.

To eliminate the second derivatives, we impose that

ẋj = ∂F/∂pj (p(s), z(s), x(s)).
Then

ṗi(s) = Σ_{j=1}^n u_{xixj}(x(s)) ẋj(s) = Σ_{j=1}^n u_{xixj}(x(s)) ∂F/∂pj (p(s), z(s), x(s)) = −∂F/∂z u_{xi}(x(s)) − ∂F/∂xi.

Thus,

ṗi(s) = −∂F/∂z u_{xi}(x(s)) − ∂F/∂xi.
Now find

ż(s) = d/ds u(x(s)) = Σ_{j=1}^n ∂u/∂xj(x(s)) ẋj(s) = Σ_{j=1}^n pj ẋj(s).

Thus,

ż(s) = Σ_{j=1}^n pj ẋj(s).
Result: If u is a solution of F(Du, u, x) = 0, and if ẋj = ∂F/∂pj (p(s), z(s), x(s)), then

ṗ(s) = −∂F/∂z p − ∂F/∂x,
ż(s) = ∂F/∂p · p,

i.e., we get a system of 2n + 1 ODEs

ẋ = Fp, ṗ = −Fz p − Fx, ż = Fp · p,

the characteristic equations for the 1st order PDE. The ODE ẋ = Fp is called the projected characteristic equation. Issues:

• How do the boundary conditions relate to initial conditions for the ODEs?
• Typically we can solve only locally, but not globally.
Compatibility condition: where to prescribe the boundary conditions?

• If we construct u(x) from the values of u along the characteristic curves, do we get a solution of the PDE?
Examples:
1. Transport equation

ut(y, t) + a(y, t) · Du(y, t) = 0, F(p, z, x) = ( a(x), 1 ) · p = 0.

Characteristic equations:

ẋ = Fp = ( a(x), 1 ), i.e. ẋ(s) = (ẏ(s), ṫ(s)) = ( a(y, t), 1 )
=⇒ ṫ(s) = 1 =⇒ t = s,
ẏ(s) = a(y(s), s),
ż(s) = Fp · p = ( a(x), 1 ) · p = 0 (by the original PDE)
=⇒ z is constant along each characteristic curve
=⇒ u(y(s), s) = constant
=⇒ u is determined from the initial data.
2. Linear PDE: F(p, z, x) = b · p + c z, i.e.

b(x) · Du(x) + c(x) u(x) = 0.

Characteristic equations:

ẋ(s) = Fp = b(x(s)),
ż(s) = −c(x(s)) u(x(s)) = −c(x(s)) z(s).

We can solve these without using the equation for p.
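The closed (x, z) system can be integrated by any ODE solver. Below is a small sketch (not from the notes; the coefficients b(x) = x, c ≡ 1 are toy choices with a known exact solution) that integrates the characteristic system with a hand-rolled RK4 step.

```python
import math

# Characteristic system for b(x)·Du + c(x)u = 0:
#   x'(s) = b(x(s)),  z'(s) = -c(x(s)) z(s).
# Toy choice (not from the notes): b(x) = x, c = 1 in R^2, so the exact
# characteristic through x0 is x(s) = x0 e^s with z(s) = z(0) e^{-s}.

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2 * b + 2 * c + d) / 6.0
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def rhs(y):                     # y = (x1, x2, z)
    x1, x2, z = y
    return [x1, x2, -z]         # b(x) = x, c(x) = 1

y = [1.0, 0.5, 2.0]             # x0 = (1, 0.5), z(0) = u(x0) = 2
h, steps = 0.01, 100            # integrate to s = 1
for _ in range(steps):
    y = rk4_step(rhs, y, h)

print(y)  # approximately (e, 0.5 e, 2/e)
```

Tracing many such characteristics from the data curve reconstructs u wherever the projected characteristics cover the domain.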
Example: x1 u_{x2} − x2 u_{x1} − u = 0 (linear):

F(p, z, x) = F(p1, p2, z, x1, x2) = x1 p2 − x2 p1 − z = x^⊥ · p − z, where x^⊥ = (−x2, x1).
Gregory T. von Nessi
Versio
n::3
11
02
01
1
5.2 Method of Characteristics111
• ẋ = Fp = x^⊥:

ẋ1(s) = −x2(s), x1(0) = x0,
ẋ2(s) = x1(s), x2(0) = 0.

Solve subject to the initial data u = g on Γ = { (x1, 0) : x1 ≥ 0 }, starting from (x0, 0) ∈ Γ.

• ż = Fp · p = x^⊥ · p = z by the equation, z(0) = g(x0).

• ṗ = −Fx − Fz p = −( p2, −p1 ) + p, with suitable ICs (see below).
The x-system is linear:

(ẋ1, ẋ2)ᵀ = A (x1, x2)ᵀ, A = ( 0 −1 ; 1 0 ), (x1, x2)(0) = (x0, 0),

so

(x1, x2)(s) = e^{As} (x0, 0)ᵀ = ( cos s −sin s ; sin s cos s ) (x0, 0)ᵀ,

i.e.

x1(s) = x0 cos s, x2(s) = x0 sin s (circular characteristics),

and

z(s) = C e^s = g(x0) e^s.
Reconstruct u(x1, x2) from the characteristics: the characteristic through (x1, x2) corresponds to

x0 = √(x1² + x2²), (x1, x2) = x0 e^{iθ}, s = θ,

so that

u(x1, x2) = g(x0) e^θ = g( √(x1² + x2²) ) exp( arctan(x2/x1) ).
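A quick sanity check (not part of the notes; g(r) = r² and the sample point are my choices) that this formula really satisfies x1 u_{x2} − x2 u_{x1} = u, using central finite differences in the quadrant x1 > 0:

```python
import math

def u(x1, x2):                      # candidate solution with g(r) = r^2, x1 > 0
    r = math.hypot(x1, x2)
    theta = math.atan(x2 / x1)
    return r ** 2 * math.exp(theta)

h = 1e-5                            # central differences, O(h^2) accurate
x1, x2 = 1.0, 0.5
ux1 = (u(x1 + h, x2) - u(x1 - h, x2)) / (2 * h)
ux2 = (u(x1, x2 + h) - u(x1, x2 - h)) / (2 * h)
lhs = x1 * ux2 - x2 * ux1
print(lhs, u(x1, x2))               # nearly equal: the PDE holds
```

The operator x1 ∂_{x2} − x2 ∂_{x1} is ∂_θ in polar coordinates, which is why the check succeeds for any smooth g.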
5.2.1 Strategy to Prove that the Method of Characteristics Gives a
Solution of F (Du, u, x) = 0
i.) Reduce initial data on Γ to initial data on a flat part of the boundary, Rn−1
ii.) Check that the structure of the PDE does not change
iii.) Set up characteristic equations with the correct initial data
iv.) Solve the characteristic equations
v.) Reconstruct u from the solution
vi.) Check that u is a solution
Here we go!
i.) Flattening the boundary locally, in a neighborhood of x0 ∈ Γ.

Suppose that Γ is of class C¹, i.e. Γ is locally the graph of a C¹ function γ of n − 1 variables: near x0 we can write Γ = { xn = γ(x′) }. The maps Φ and Ψ below carry a neighborhood U of x0 onto a neighborhood of 0 in which the boundary is flattened onto Rn−1.
Define transformations (y = (y′, yn)):

y = Φ(x): y′ = x′, yn = xn − γ(x′),
x = Ψ(y): x′ = y′, xn = yn + γ(y′).

If u : U → R, then define ū by

ū(y) = u(x) = u(Ψ(y)), or vice versa u(x) = ū(y) = ū(Φ(x)).
If u satisfies F(Du, u, x) = 0 in U, what is the equation satisfied by ū?

ii.) By the chain rule,

∂u/∂xi (x) = ∂/∂xi ū(Φ(x)) = Σ_{j=1}^n ∂ū/∂yj ∂Φ^j/∂xi = ( (DΦ)ᵀ Dū(Φ(x)) )_i,

where DΦ is the n × n matrix with entries (DΦ)^j_k = ∂Φ^j/∂xk. In matrix notation,

Du(x) = DΦᵀ(x) · Dū(Φ(x)) =⇒ Du(x) = DΦᵀ(Ψ(y)) · Dū(y).

Hence F(Du(x), u(x), x) = 0 becomes

F( DΦᵀ(Ψ(y)) · Dū(y), ū(y), Ψ(y) ) =: F̄( Dū(y), ū(y), y ) = 0,

where F̄(p, z, y) := F( DΦᵀ(Ψ(y)) p, z, Ψ(y) ).

In particular, F̄ is a smooth function if the boundary of U is smooth.
From now on assume that x0 = 0 and that Γ ⊆ Rn−1 near x0.

Goal: construct a solution u in a neighborhood of x0 = 0.
iii.) Find initial conditions for the ODEs.

For every y ∈ Rn−1 close to 0 we want the characteristic x(s) through y:

=⇒ x(0) = y ∈ Rn−1,
=⇒ z(0) = u(x(0)) = g(x(0)) = g(y),
=⇒ pi(0) = ∂g/∂yi (y), i = 1, . . . , n − 1.

These are the tangential derivatives (compatibility condition). One degree of freedom remains, pn(0); we must choose pn(0) to satisfy

F( ∂g/∂y1 (y), . . . , ∂g/∂y_{n−1} (y), pn(0), g(y), y ) = 0.
Definition 5.2.1: A triple (p0, z0, x0) of initial conditions is admissible at x0 if z0 = g(x0), p_{0,i} = ∂g/∂yi (x0) for i = 1, . . . , n − 1, and F(p0, z0, x0) = 0.

In summary, the initial conditions are

x(0) = y (∈ Rn−1),
z(0) = g(y),
pi(0) = ∂g/∂xi (y), i = 1, . . . , n − 1,
pn(0) chosen such that F(p1(0), . . . , pn(0), g(y), y) = 0.   (5.2)

Admissibility of (p0, z0, x0) amounts to n equations for the n unknowns p_{0,1}, . . . , p_{0,n}.
Question: We want to solve close to x0; can we find admissible initial data (q(y), g(y), y) for y close to x0, i.e. with F(q(y), g(y), y) = 0?

Lemma 5.2.2: Let (p0, z0, x0) be admissible, and suppose that F_{pn}(p0, z0, x0) ≠ 0. Then there exists a unique solution of (5.2) for y sufficiently close to x0. The triple (p0, z0, x0) is said to be non-characteristic.
Remark: If Γ is a smooth boundary then (p0, z0, x0) is non-characteristic if Fp(p0, z0, x0) · ν(x0) ≠ 0, where ν(x0) is the normal to Γ at x0.

Proof: Use the implicit function theorem. Let G : Rn × Rn → Rn, G(p, y) = (G¹(p, y), . . . , Gⁿ(p, y)), with

G^i(p, y) = pi − ∂g/∂xi (y), i = 1, . . . , n − 1, Gⁿ(p, y) = F(p, g(y), y).

The triple (p0, z0, x0) is admissible, so G(p0, x0) = 0. We want to solve locally for p = q(y); we can do this if det(Gp(p0, x0)) ≠ 0. Now Gp(p, y) is the matrix whose first n − 1 rows are those of the identity (with a zero last column) and whose last row is (Fp1, . . . , Fpn), so

det(Gp(p, y)) = Fpn(p, g(y), y)
=⇒ det(Gp(p0, x0)) ≠ 0
=⇒ there exists a unique solution p = q(y) for y close to x0,

and (q(y), g(y), y) is admissible initial data for the ODEs. □
iv.) Solution of the system of ODEs

ẋ(s) = Fp(p(s), z(s), x(s)),
ż(s) = Fp(p(s), z(s), x(s)) · p(s),
ṗ(s) = −Fx(p(s), z(s), x(s)) − Fz(p(s), z(s), x(s)) p(s),

with initial data

x(0) = y, z(0) = g(y), p(0) = q(y), y ∈ Rn−1 a parameter.

Let X(y, s) = ( x(s), z(s), p(s) ).
System:

Ẋ(y, s) = G(X(y, s)), X(y, 0) = (q(y), g(y), y).   (5.3)

Lemma 5.2.3 (Existence of solution for (5.3)): If G ∈ C^k and X(y, 0) is C^k (as a function of y), then there exists a neighborhood N of x0 in Rn−1 and an s0 > 0 such that (5.3) has a unique solution X : N × (−s0, s0) → Rn × R × Ω, and X is C^k.
v.) Reconstruction of u from the solutions of the characteristic equations.

Given x near x0, find the characteristic through x, i.e. find y = Y(x) and s = S(x) such that the characteristic through y passes through x at time s.

Define: u(x) = z(Y(x), S(x)).

Check: we can invert the map x ↦ (Y(x), S(x)).

Lemma 5.2.4: Suppose that (p0, z0, x0) are non-characteristic initial data. Then there exists a neighborhood V of x0 such that for all x ∈ V there exist a unique Y and a unique S with

x = X(Y(x), S(x)).
Proof: We need to invert X, by the inverse function theorem: we can invert X(y, s) near (x0, 0) if det DX(x0) ≠ 0. At s = 0 the y-derivatives give the first n − 1 columns of the identity, and the last column is ∂sX = Fp (recall the characteristic equation ẋ = Fp), so

D_{(y,s)}X(x0) = ( I_{n−1} (Fp1, . . . , Fp_{n−1})ᵀ ; 0 Fpn )

=⇒ det(DX) = Fpn ≠ 0, since (p0, z0, x0) is non-characteristic. □
vi.) Prove that u is a solution of the PDE. With

u(x) = Z(Y(x), S(x)), p(x) = P(Y(x), S(x)),

the crucial step is Du(x) = p(x).

Step 1: f(y, s) := F(P(y, s), Z(y, s), X(y, s)) = 0.

Proof:

f(y, 0) = F(P(y, 0), Z(y, 0), X(y, 0)) = F(q(y), g(y), y) = 0,

since (q(y), g(y), y) is admissible, and

ḟ(y, s) = Fp · Ṗ + Fz Ż + Fx · Ẋ = Fp · (−Fx − Fz P) + Fz Fp · P + Fx · Fp = 0

=⇒ F(P(y, s), Z(y, s), X(y, s)) = 0, i.e. F(p(x), u(x), x) = 0.
Recall: capital letters denote the dependence on y: P(y, s), Z(y, s), X(y, s) are the solutions with initial data (q(y), g(y), y). Invert the characteristics: given x, find Y(x), S(x) such that X(Y(x), S(x)) = x. Convention today: all gradients are rows. Define

u(x) := Z(Y(x), S(x)), p(x) := P(Y(x), S(x)).
1. f(y, s) = F(P(y, s), Z(y, s), X(y, s)) ≡ 0 as a function of s.
2. We assert the following:

i.) Zs = P · Xs,
ii.) Zyi = Σ_{j=1}^n Pj ∂X^j/∂yi, i = 1, . . . , n − 1, i.e. DyZ = P · DyX.

Proof:

i.) Zs = Fp · P = Xs · P by the characteristic equations.

ii.) Define

r(s) := DyZ − P · DyX

and show that r(s) = 0. Idea: find an ODE for r and show that r = 0 is the unique solution. At s = 0,

ri(0) = ∂g/∂xi (y) − qi(y) = 0, i = 1, . . . , n − 1,

since the triple (q(y), g(y), y) is admissible. Differentiating, and using DsZ = P · DsX,

ṙ(s) = DsDyZ − DsP · DyX − P · DsDyX
 = DyP · DsX − DsP · DyX
 (char. eq.) = DyP · Fp + (Fx + Fz P) · DyX
 = −Fz DyZ − Fx · DyX + (Fx + Fz P) · DyX   (since Dy(F(P, Z, X)) = Fp · DyP + Fz DyZ + Fx · DyX = 0)
 = −Fz (DyZ − P · DyX)
 = −Fz r(s).

∴ ṙ(s) = −Fz r(s), r(0) = 0.
Linear ODE for r with r(0) = 0 =⇒ r = 0 for s > 0.
3. Prove that Du(x) = p(x); then

F(P, Z, X) = F(Du(x), u(x), x) = 0 =⇒ u is a solution of the PDE.

From u(x) = Z(Y(x), S(x)):

Dxu = DyZ · DxY + DsZ · DxS
 (by 2.) = P · DyX · DxY + P · DsX · DxS
 = P ( DyX · DxY + DsX · DxS )
 = P · DxX(Y, S)
 = P · Id = p

=⇒ Dxu = P. □
Examples:

1. Linear, homogeneous 1st order PDE:

b(x) · Du(x) + c(x) u(x) = 0, F(p, z, x) = b(x) · p + c(x) z, Fp = b.

Local existence of solutions requires the initial data to be non-characteristic, i.e. we can solve if

Fp(p0, z0, x0) · ν(x0) = b(x0) · ν(x0) ≠ 0.
So we have local existence if b(x0) is not tangential to Γ. The characteristic equation is

ẋ = Fp = b;

x(s) would be tangential to Γ if the initial conditions failed to be non-characteristic.
2. Quasilinear equation

b(x, u) · Du(x) + c(x, u) = 0.

Initial data non-characteristic: Fp = b, so

Fp(p0, z0, x0) · ν(x0) = b(x0, z0) · ν(x0) ≠ 0.

Characteristics:

ẋ = Fp, i.e. ẋ(s) = b(x(s), z(s)),
ż = Fp · p = b(x(s), z(s)) · p(s) = −c(x(s), z(s)) (by the equation).

This is a closed system

ẋ(s) = b(x(s), z(s)), x(0) = x0,
ż(s) = −c(x(s), z(s)), z(0) = g(x0),

for x and z; we do not need the equation for p.
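As a concrete sketch of the closed system (not from the notes: the data u(x1, 0) = x1/2 on {x2 = 0} is a choice made here for simplicity), take b = (u, 1), c = −1, i.e. the quasilinear equation u u_{x1} + u_{x2} = 1. The characteristics are ẋ1 = z, ẋ2 = 1, ż = 1, and they can be both integrated numerically and inverted in closed form:

```python
# Closed characteristic system for u u_{x1} + u_{x2} = 1 with the
# (hypothetical) data u(x1, 0) = x1/2 on {x2 = 0}:
#   x1' = z, x2' = 1, z' = 1   (b = (u, 1), c = -1, so z' = -c = 1).
def flow(x0, s, n=10000):
    h = s / n
    x1, x2, z = x0, 0.0, x0 / 2.0      # initial data on {x2 = 0}
    for _ in range(n):
        # explicit Euler; the exact solution is quadratic in s, error O(h)
        x1, x2, z = x1 + h * z, x2 + h, z + h
    return x1, x2, z

x1, x2, z = flow(1.0, 1.0)
# exact: x1 = x0 + (x0/2) s + s^2/2 = 2, x2 = 1, z = x0/2 + s = 3/2
print(x1, x2, z)

# invert the characteristics: s = x2, x0 = (x1 - x2^2/2)/(1 + x2/2),
# and u(x1, x2) = x0/2 + x2
def u(x1, x2):
    x0 = (x1 - x2 ** 2 / 2.0) / (1.0 + x2 / 2.0)
    return x0 / 2.0 + x2

print(u(x1, x2))   # agrees with z along the characteristic
```

Note that p was never needed: the (x, z) system alone determines u.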
5.3 Exercises
5.1: [Evans 3.5 #2] Write down the characteristic equations for the linear equation
ut + b ·Du = f in Rn × (0,∞)
where b ∈ Rn and f = f (x, t). Then solve the characteristic equations with initial conditions
corresponding to the initial conditions
u = g on Rn × t = 0.
5.2: [Evans 3.5 #3] Solve the following first order equations using the method of charac-
teristics. Derive the full system of the ordinary differential equations including the correct
initial conditions before you solve the system!
a) x1 u_{x1} + x2 u_{x2} = 2u with u(x1, 1) = g(x1).
b) u u_{x1} + u_{x2} = 1 with u(x1, x1) = ½ x1.
c) x1 u_{x1} + 2 x2 u_{x2} + u_{x3} = 3u with u(x1, x2, 0) = g(x1, x2).
5.3: [Qualifying exam 08/00] Suppose that u : R × [0, T] → R is a smooth solution of ut + u ux = 0 that is periodic in x with period L, i.e., u(x + L, t) = u(x, t). Show that

max_{x∈R} u(x, 0) − min_{x∈R} u(x, 0) ≤ L/T.
5.4: Solve the nonlinear PDE of the first order

u_{x1}³ − u_{x2} = 0, u(x1, 0) = g(x1) = 2 x1^{3/2}.
5.5: [Qualifying exam 01/90] Find a classical solution of the Cauchy problem
(ux)² + ut = 0, u(x, 0) = x².
What is the domain of existence of the solution?
6 Conservation Laws
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
6.1 The Rankine-Hugoniot Condition 124
6.2 Shocks, Rarefaction Waves and Entropy Conditions 129
6.3 Viscous Approximation of Shocks 135
6.4 Geometric Solution of the Riemann Problem for General Fluxes 138
6.5 A Uniqueness Theorem by Lax 139
6.6 Entropy Function for Conservation Laws 141
6.7 Conservation Laws in nD and Kruzkov’s Stability Result 143
6.8 Exercises 148
6.1 The Rankine-Hugoniot Condition
Conservation law: ut + F (u)x = 0
initial data: u(x, 0) = u0(x) = g(x)
Example: Burgers' equation, F(u) = ½u²:

ut + F(u)x = ut + ( u²/2 )x = ut + u ux = 0.
Solutions via Method of Characteristics. Writing the equation as

ut + F′(u) ux = 0,

the standard form is G(p, z, y) = 0, where p = (ux, ut) and y = (x, t):

G(p, z, y) = p2 + F′(z) p1 = 0.
6.1 The Rankine-Hugoniot Condition125
Characteristic equations: ẏ = Gp, ż = Gp · p, i.e.

ẏ = ( F′(z), 1 ),
ż = ( F′(z), 1 ) · p = p1 F′(z) + p2 = 0 (by the equation).

Rewritten, we have

ẏ1(s) = F′(z(s)), y1(0) = x0, z(0) = g(x0),
ẏ2(s) = 1, y2(0) = 0
=⇒ y2(s) = s (=⇒ identification t = s)
=⇒ ż(s) = 0 =⇒ z(s) = constant = g(x0),
ẏ1(s) = F′(z(s)) = F′(g(x0)),
y1(s) = F′(g(x0)) s + x0 = F′(g(x0)) t + x0 (s = t)

=⇒ characteristic curves are straight lines.
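A tiny sketch (not from the notes) of these straight-line characteristics for Burgers' equation, using the ramp data g(x) = 1 for x ≤ 0, 1 − x on [0, 1], 0 for x ≥ 1 from the example below: every characteristic launched from [0, 1] passes through the single point (x, t) = (1, 1), which is where the classical solution breaks down.

```python
# Characteristics for Burgers' equation u_t + u u_x = 0 are straight lines
# x(t) = x0 + g(x0) t. For the ramp data below, the line from x0 has slope
# g(x0) = 1 - x0 on [0, 1], so x0 + (1 - x0)*1 = 1: all lines cross at t = 1.
def g(x0):
    if x0 <= 0.0:
        return 1.0
    if x0 < 1.0:
        return 1.0 - x0
    return 0.0

def char(x0, t):
    return x0 + g(x0) * t

pts = [char(x0 / 10.0, 1.0) for x0 in range(11)]
print(pts)   # every entry is (numerically) 1.0
```

The first crossing time of characteristics is the lifespan of the classical solution; here it is t = 1, matching the example.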
Example: Burgers' equation with initial data

g(x) = u(x, 0) = { 1, x ≤ 0; 1 − x, 0 ≤ x < 1; 0, x ≥ 1 }.

The speed of the characteristic through x0 is F′(g(x0)) = g(x0), so the characteristics from [0, 1] cross, and the classical solution exists only locally:

• Discontinuous "solutions" in finite time.
• Break down of the classical solution.
• Same effect if you take smooth (C¹, or more) initial data.
(Figure: characteristics in the (x, t)-plane meeting at t = 1, and the profiles u(·, t) at t = ½ and t = 1.)
• The equation ut + F(u)x = 0, involving derivatives, is not defined at a discontinuity.
• Weak solutions (or integral solutions).
• Go back from the PDE to an integrated version.
• Let v be a test function: v smooth, with compact support in R × [0, ∞).
• Multiply the equation by v and integrate by parts.
0 = ∫_0^∞ ∫_R v ( ut + F(u)x ) dx dt
 = −∫_0^∞ ∫_R vt u dx dt + [ ∫_R v u dx ]_{t=0}^{t=∞} − ∫_0^∞ ∫_R vx F(u) dx dt + [ ∫_0^∞ F(u) v dt ]_{x=−∞}^{x=∞}
 = −∫_0^∞ ∫_R ( vt u + vx F(u) ) dx dt − ∫_R v(x, 0) u(x, 0) dx.

So, we have

∫_0^∞ ∫_{−∞}^∞ ( vt u + vx F(u) ) dx dt + ∫_{−∞}^∞ v(x, 0) g(x) dx = 0.   (6.1)

Definition 6.1.1: u is a weak solution of the conservation law ut + F(u)x = 0 if (6.1) holds for all test functions v.
Question: What information is hidden in the weak formulation?

If u ∈ C¹ is a weak solution, reverse the integration by parts and get

∫_0^∞ ∫_R v ( ut + F(u)x ) dx dt = 0

for all smooth test functions =⇒ ut + F(u)x = 0.
6.1.1 Information about Discontinuous Solutions

Suppose u is a solution in V = Vl ∪ Vr, u smooth in Vl and Vr, and u has a discontinuity across a curve C separating Vl (left) from Vr (right) in the (x, t)-plane (e.g. u ≡ 1 on Vl and u ≡ 0 on Vr). Is this possible?

Let v be a test function. If the support of v is contained in Vl (or Vr), then ut + F(u)x = 0 in Vl (resp. Vr). Now suppose the support of v contains a part of C, and that v(x, 0) = 0. Then (6.1) gives

∫∫_V ( vt u + vx F(u) ) dx dt = ∫∫_{Vl} ( vt u + vx F(u) ) dx dt + ∫∫_{Vr} ( vt u + vx F(u) ) dx dt = 0.

Now, let us look at the first term. Integrating by parts, with ν = (ν1, ν2) the outward normal to Vl along C,

∫∫_{Vl} ( vt u + vx F(u) ) dx dt = −∫∫_{Vl} v ( ut + F(u)x ) dx dt + ∫_C v ( F(u) ν1 + u ν2 ) dS
 = ∫_C v ( F(ul) ν1 + ul ν2 ) dS,

where ul denotes the trace of u from Vl on the curve C. The same calculation for Vr, with ν replaced by −ν, gives

∫∫_{Vr} ( vt u + vx F(u) ) dx dt = −∫_C v ( F(ur) ν1 + ur ν2 ) dS.

Grand total:

∫_C v [ ( F(ul) − F(ur) ) ν1 + ( ul − ur ) ν2 ] dS = 0
=⇒ ( F(ul) − F(ur) ) ν1 + ( ul − ur ) ν2 = 0 along C.

Suppose now that C is given by (x, t) = (s(t), t). Then ν is proportional to (1, −ṡ), and

F(ul) − F(ur) = ṡ ( ul − ur ).
With tangent (ṡ, 1) and normal ν = (1, −ṡ) along C, this is typically written as (adopting new notation)

[[F(u)]] = σ · [[u]],

where σ = ṡ and [[w]] = wl − wr is the jump. This is the Rankine-Hugoniot condition.
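The condition is a one-line computation. A small sketch (not part of the notes; for Burgers' flux F(u) = u²/2 the formula reduces to σ = (ul + ur)/2):

```python
# Rankine-Hugoniot: the speed of a jump from ul to ur is
#   sigma = [[F(u)]] / [[u]] = (F(ul) - F(ur)) / (ul - ur).
def shock_speed(F, ul, ur):
    return (F(ul) - F(ur)) / (ul - ur)

burgers = lambda u: 0.5 * u * u
print(shock_speed(burgers, 1.0, 0.0))   # 0.5, the speed in the Riemann example
```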
6.2 Shocks, Rarefaction Waves and Entropy Conditions
We return to our original example:

ut + u ux = 0, g(x) = { 1, x ≤ 0; 1 − x, 0 ≤ x < 1; 0, x ≥ 1 }.

A discontinuity develops at t = 1; specifically, there is a jump from ul = 1 to ur = 0. In the general situation, one solves the Riemann problem for the conservation law

ut + F(u)x = 0, u(x, 0) = { ul, x < 0; ur, x > 0 }.
Example: The Riemann problem for Burgers' equation:

1. ul = 1, ur = 0. The discontinuity can propagate along a curve C with

ṡ = [[F(u)]]/[[u]] = ( F(ul) − F(ur) )/( ul − ur ) = ( ½ − 0 )/1 = ½.

The characteristics are going into the curve C: a so-called shock wave.

2. ul = 0, ur = 1. Again, we have ṡ = ½; but this time the characteristics are emerging from the shock.
Is this the only solution? We can find another solution of Burgers' equation by the similarity method, i.e. by seeking solutions of the form

u(x, t) = t^{−α} v( x / t^β ).
The similarity ansatz leads to u(x, t) = x/t. We now construct a different solution of the Riemann problem with ul = 0, ur = 1:

u(x, t) = { 0, x < 0; x/t, 0 ≤ x < t; 1, x > t }.
This solution is continuous and is called a rarefaction wave. It is a weak solution.

Question: Which solution is the physically relevant solution?

Ideas: The solution should be stable under perturbations, i.e. should look similar if we smooth the initial data. The solution should be the limit of a viscous approximation of the original equation (more later).

Recall that ẏ = ( F′(z), 1 ), i.e., F′(ul) is the speed of the characteristics with initial data ul.
Lax entropy condition: F′(ul) > ṡ > F′(ur), i.e., characteristics impinge into the shock.
Recall: Burgers' equation

ut + ( u²/2 )x = ut + u ux = 0.
Rarefaction wave initial data: the characteristics fan out from the origin in the (x, t)-plane.

Shock wave initial data: with ṡ = ½ one can draw either the entropy shock, with characteristics entering the discontinuity, or a "non-physical" shock with characteristics leaving it — so weak solutions are not unique.
We add entropy conditions to select solutions:

Lax: characteristics go into the shock.

Viscous approximation: u as a limit of uε,

uεt + uε uεx − ε uεxx = 0.

Hopf-Cole: a change of the dependent variable, w = Φ(u). If

ut − a ∆u + b |Du|² = 0, u(·, 0) = g,

then

wt = a ∆w − |Du|² ( a Φ″(u) + b Φ′(u) ).

Choose Φ as a solution of a Φ″ + b Φ′ = 0 =⇒ w solves the heat equation. One solution is given by

Φ(t) = exp( −bt/a ).
Choosing this Φ, we have

wt − a ∆w = 0, w(x, 0) = exp( −b g(x)/a ).
6.2.1 Burgers' Equation

Let u = uε solve the viscous Burgers equation ut + u ux = ε uxx, and define

v(x, t) := ∫_{−∞}^x u(y, t) dy,

assuming that the integral exists and that u, ux → 0 as x → −∞. The equation for v:

vt − ε vxx + ½ vx² = 0.
Proof: Compute

vt − ε vxx + ½ vx² = ∫_{−∞}^x ut dy − ε ux + ½ u²
 = ∫_{−∞}^x ( ε uyy − u uy ) dy − ε ux + ½ u²   (u uy = (½u²)y)
 = [ ε uy − ½ u² ]_{−∞}^x − ε ux + ½ u²
 = ε ux − ½ u² − ε ux + ½ u²
 = 0.

=⇒ v solves

vt − ε vxx + ½ vx² = 0, v(x, 0) = h(x) = ∫_{−∞}^x g(y) dy.

This is of the form of the model equation with a = ε, b = ½.
w = exp( −b v / a ) solves

wt − ε wxx = 0, w(x, 0) = exp( −b h(x)/ε ).
The unique bounded solution is

w(x, t) = (4πεt)^{−1/2} ∫_R exp( −(x − y)²/4εt ) exp( −b h(y)/ε ) dy.
Now, v(x, t) = −(ε/b) ln w, i.e.

v(x, t) = −(ε/b) ln [ (4πεt)^{−1/2} ∫_R exp( −(x − y)²/4εt ) exp( −b h(y)/ε ) dy ],

and u(x, t) = vx(x, t):

u(x, t) = [ ∫_R ( (x − y)/2bt ) exp( −(x−y)²/4εt ) exp( −b h(y)/ε ) dy ] / [ ∫_R exp( −(x−y)²/4εt ) exp( −b h(y)/ε ) dy ].
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
134Chapter 6: Conservation Laws
This is a formula for uε; now let ε → 0.

Asymptotics of the formula for ε → 0: Suppose that k and l are continuous functions, l grows at most linearly, and k grows at least quadratically. Suppose that k has a unique minimum point y0,

k(y0) = min_{y∈R} k(y).

Then

lim_{ε→0} [ ∫_R l(y) exp( −k(y)/ε ) dy ] / [ ∫_R exp( −k(y)/ε ) dy ] = l(y0).

Proof: See Evans.
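The lemma is easy to observe numerically. The sketch below (not from the notes; k(y) = (y − 2)² and l(y) = y² are toy choices) evaluates both integrals by a simple Riemann sum: the weight exp(−k/ε) concentrates at the minimum point y0 = 2, so the ratio tends to l(2) = 4.

```python
import math

# Laplace-type asymptotics: with k(y) = (y - 2)^2 and l(y) = y^2, the
# weighted average of l against exp(-k/eps) tends to l(2) = 4 as eps -> 0
# (the weight is a Gaussian of variance eps/2 centered at y0 = 2).
def ratio(eps, a=-10.0, b=10.0, n=20001):
    h = (b - a) / (n - 1)
    num = den = 0.0
    for i in range(n):
        y = a + i * h
        w = math.exp(-((y - 2.0) ** 2) / eps)
        num += (y ** 2) * w * h
        den += w * h
    return num / den

for eps in (1.0, 0.1, 0.01):
    print(eps, ratio(eps))   # approaches l(2) = 4 as eps -> 0
```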
Evaluation for the viscous approximation with

g(x) = { 0, x < 0; 1, x > 0 }:

l(y) = (x − y)/t,
k(y) = (x − y)²/4t + h(y)/2,
h(y) = ∫_{−∞}^y g(x) dx = { 0, y < 0; y, y ≥ 0 } =: y⁺.

Find the minimum of k:

Case 1: y < 0, k(y) = (x − y)²/4t. Minimum: y0 = x if x ≤ 0; y0 = 0 if x ≥ 0.

Case 2: y ≥ 0, k(y) = (x − y)²/4t + y/2:

k′(y) = −(x − y)/2t + ½, k″(y) = 1/2t > 0 =⇒ minimum where k′(y) = 0, i.e. y = x − t:
y0 = x − t if x ≥ t; y0 = 0 if x ≤ t.
Evaluation of the formula:

(1): x > t:

y ≤ 0: =⇒ y0 = 0, k(y0) = x²/4t;
y ≥ 0: =⇒ y0 = x − t, k(x − t) = t/4 + (x − t)/2 = x/2 − t/4.

Since

x²/4t > x/2 − t/4 ⇐⇒ x² > 2xt − t² ⇐⇒ (x − t)² > 0, ✓

the global minimum is y0 = x − t, and with l(y) = (x − y)/t,

l(x − t) = t/t = 1.
Gregory T. von Nessi
Versio
n::3
11
02
01
1
6.3 Viscous Approximation of Shocks135
Analogous arguments give:

(2): 0 ≤ x ≤ t: y0 = 0, l(y0) = x/t;
(3): x < 0: y0 = x, l(x) = 0.

Hence

u(x, t) = lim_{ε→0} uε(x, t) = { 0, x < 0; x/t, 0 ≤ x ≤ t; 1, x > t }.

Rarefaction wave!
6.3 Viscous Approximation of Shocks
Burgers' equation: shocks (RH condition), rarefaction waves.

For a jump from ul to ur at x0:

ut + F(u)x = ut + F′(u) ux = 0, Lax criterion: F′(ul) > s > F′(ur),

since F′(ul/r) is the speed of the characteristics on the left/right of x0.

More generally, find shock solutions of the Riemann problem

ut + F(u)x = 0, u(x, 0) = { ul, x < 0; ur, x > 0 }.

This is the building block for numerical schemes that approximate initial data by piecewise constant functions.

Viscous approximation: ut + F(u)x = ε uxx.

Special solutions: traveling waves

u(x, t) = v(x − st), s = speed of the profile.
Find v = v(y) such that v(y) → ul as y → −∞ and v(y) → ur as y → +∞. Equation for v: with ut = −s v′, ux = v′, uxx = v″,

ut + F′(u) ux = ε uxx becomes −s v′ + F′(v) v′ = ε v″.

Define w(y) = v(εy), i.e. v(y) = w(y/ε):

−s w′ + F′(w) w′ = w″, w(y) → ul as y → −∞, w(y) → ur as y → +∞.
Suppose for the moment that ul > ur, so the profile should decrease from ul to ur. Since

−s w′ + F′(w) w′ = ( −s w + F(w) )′,

we integrate:

w′ = −s w + F(w) + C0, C0 = constant of integration.

Let Φ(w) = −s w + F(w) + C0. If w(y) → ul and w′(y) → 0 as y → −∞, then

−s ul + F(ul) + C0 = 0 ⇐⇒ C0 = s ul − F(ul),

and then

Φ(w) = −s ( w − ul ) + F(w) − F(ul).

If w → ur and w′ → 0 as y → +∞, then also Φ(ur) = 0:

−s ( ur − ul ) + F(ur) − F(ul) = 0 ⇐⇒ s = ( F(ur) − F(ul) )/( ur − ul ) (R.H.).
So w is defined by the ODE w′ = Φ(w). If Φ is at least Lipschitz, we have local existence and uniqueness of solutions. Moreover, Φ(ur) = 0 and Φ(ul) = 0, so w ≡ ur and w ≡ ul are solutions; by uniqueness, a connecting profile can never touch them:

ul > w(y) > ur for all y.

Analogously, there is no u with ul > u > ur and Φ(u) = 0, so Φ has a sign on (ur, ul); since w is decreasing, Φ(u) < 0 on (ur, ul):

Φ(u) = −s ( u − ul ) + F(u) − F(ul) < 0 for u ∈ (ur, ul).

So,

F(u) < s ( u − ul ) + F(ul) = F(ul) + [ ( F(ur) − F(ul) )/( ur − ul ) ] ( u − ul )

=⇒ F lies under the line segment connecting (ur, F(ur)) and (ul, F(ul)).
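For Burgers' flux the profile ODE can be written out explicitly. The following sketch (not from the notes; ul = 1, ur = 0 and the step size are my choices) integrates w′ = Φ(w) with Φ(w) = w(w − 1)/2 and compares against the closed-form profile w(y) = 1/(1 + e^{y/2}), which one can check solves the ODE:

```python
import math

# Traveling-wave profile for Burgers' flux F(w) = w^2/2 with ul = 1, ur = 0:
# then s = 1/2 and Phi(w) = -s(w - 1) + F(w) - F(1) = w(w - 1)/2, and the
# ODE w' = Phi(w) has the explicit solution w(y) = 1/(1 + e^{y/2}).
def phi(w):
    return 0.5 * w * (w - 1.0)

w, y, h = 0.5, 0.0, 0.001          # start on the profile at y = 0
for _ in range(10000):             # forward Euler out to y = 10
    w += h * phi(w)
    y += h

exact = 1.0 / (1.0 + math.exp(y / 2.0))
print(w, exact)    # both small: the profile has decayed toward ur = 0
```

Note Φ < 0 on (0, 1), so the numerical profile is strictly decreasing from ul toward ur, exactly as the argument above predicts.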
Remarks:
1. This is always true (for all choices of ul and ur) if F is convex.

2. Take the limit u ↗ ul: from s ( u − ul ) > F(u) − F(ul), dividing by u − ul < 0,

s < ( F(u) − F(ul) )/( u − ul ), and in the limit s ≤ F′(ul).

As u ↘ ur,

F(u) − F(ur) < F(ul) − F(ur) + s ( ur − ul + u − ur ) = s ( u − ur )
=⇒ s > ( F(u) − F(ur) )/( u − ur ), and in the limit s ≥ F′(ur).

The two conditions together yield F′(ul) ≥ s ≥ F′(ur) (Lax shock admissibility criterion). So, seeking traveling wave solutions, we recover the R.H. and Lax conditions.

3. Combining the inequalities,

( F(u) − F(ul) )/( u − ul ) > s > ( F(u) − F(ur) )/( u − ur ) for all u ∈ (ur, ul):

Oleinik's entropy criterion.
6.4 Geometric Solution of the Riemann Problem for General Fluxes

Let K = { (u, y) : ur ≤ u ≤ ul, y ≤ F(u) }. The convex hull of K is bounded by line segments tangent to the graph of F and arcs on the graph of F. The entropy solution consists of shocks (corresponding to the line segments, with speeds s1, s2, . . . given by their slopes) and rarefaction waves (corresponding to the arcs), connecting u = ul on the left through intermediate states u1, u2, . . . to u = ur on the right.
6.5 A Uniqueness Theorem by Lax
Conservation law: ut + F(u)x = 0. A weak solution (say piecewise smooth with discontinuities) is an entropy solution if Oleinik's entropy criterion holds,

( F(u) − F(ul) )/( u − ul ) > s > ( F(u) − F(ur) )/( u − ur ),

for all discontinuities of u.
Theorem 6.5.1 (Lax): Suppose u, v are two entropy solutions. Then

µ(t) = ∫_R |u(x, t) − v(x, t)| dx

is decreasing in time.

Corollary 6.5.2 (Uniqueness): There exists at most one entropy solution with given initial data u(x, 0) = u0(x), x ∈ R.

Proof: For two such solutions, µ(0) = 0; µ decreasing =⇒ µ(t) = 0 for all t =⇒ u(x, t) = v(x, t). □
Proof of Theorem 6.5.1: Consider u − v. Choose points xi(t) in such a way that u − v changes its sign at the points xi and has a fixed sign on [xi, xi+1], namely (−1)^i. Now

µ(t) = Σ_n (−1)^n ∫_{xn}^{xn+1} ( u(x, t) − v(x, t) ) dx.
Compute µ′(t), and check that the jumps inside the intervals (xi, xi+1) make no difficulties:
µ′(t) = Σ_n (−1)^n [ ∫_{xn}^{xn+1} ( ut − vt ) dx + ( u(x⁻_{n+1}, t) − v(x⁻_{n+1}, t) ) ẋ_{n+1} − ( u(x⁺_n, t) − v(x⁺_n, t) ) ẋ_n ]   (6.2)

For the first term,

∫_{xn}^{xn+1} ( u − v )t dx = −∫_{xn}^{xn+1} ( F(u) − F(v) )x dx = −( F(u) − F(v) ) |_{x⁺_n}^{x⁻_{n+1}}.
Rewrite (6.2) formally:

µ′(t) = Σ_n (−1)^n [ −( F(u) − F(v) ) |_{x⁺_n}^{x⁻_{n+1}} + ( u − v ) ẋ |_{x⁺_n}^{x⁻_{n+1}} ]
 = Σ_n (−1)^n [ ( F(u) − F(v) − (u − v) ẋ ) |_{x⁻_n} + ( F(u) − F(v) − (u − v) ẋ ) |_{x⁺_n} ],

where in the second line the terms have been regrouped by the point xn: each xn collects a contribution from [x_{n−1}, xn] and one from [xn, x_{n+1}].
Case 1: u − v continuous at xn. Then u(xn, t) = v(xn, t) =⇒ no contribution to the sum at xn.

Case 2: u jumps from ul to ur with ur < ul and v continuous; u − v changes sign at xn =⇒ ur < v = v(xn) < ul, and u − v < 0 on [xn, xn+1] =⇒ n is odd. The term at xn is

−[ F(ul) − F(v) − (ul − v) ẋn ] − [ F(ur) − F(v) − (ur − v) ẋn ],

and both brackets are positive: the first since ( F(ul) − F(v) )/( ul − v ) > s with ul − v > 0, the second since ( F(ur) − F(v) )/( ur − v ) < s with ur − v < 0 (Oleinik's criterion, with ẋn = s).
=⇒ the total contribution at xn is non-positive
=⇒ µ′(t) ≤ 0
=⇒ µ is decreasing.

The remaining cases are discussed analogously. □
6.6 Entropy Function for Conservation Laws

Idea: additional equations to be satisfied by smooth solutions, inequalities for non-smooth solutions.

Let u be a smooth solution of ut + F(u)x = 0. Suppose you find a pair of functions (η, ψ),

η entropy: η convex, η″ > 0,
ψ entropy flux,

such that

η(u)t + ψ(u)x = 0.

Expanding, η′(u) ut + ψ′(u) ux = 0, while ut + F′(u) ux = 0 gives η′(u) ut + F′(u) η′(u) ux = 0. This holds if

ψ′ = F′ η′.
Suppose that uε is a solution of the viscous approximation
uεt + F (uε)x = ε · uεxx
suppose that (η,ψ) is an entropy-entropy flux pair with η convex.
=⇒ uεt · η′(uε) + F ′(uε) · η′(uε) · uεx = ε · η′(uε) · uεxx
⇐⇒ η(uε)t + ψ(uε) · uεx = ε(η′ · ux)x − ε · η′′(uε) · u2x
Integrate this on [x1, x2]× [t1, t2]
∫ t2
t1
∫ x2
x1
(η(uε)t + ψ(uε)x) dxdt = ε
∫ t2
t1
∫ x2
x1
(ε · η′(uε) · uεx)x dxdt
− ε∫ t2
t1
∫ x2
x1
ε · η′′(uε)uε 2x dxdt
︸ ︷︷ ︸≥0
=⇒∫ t2
t1
∫ x2
x1
(η(uε)t + ψ(uε)x) dxdt ≤ ε2
∫ t2
t1
−[η′(uε(x2, t)) · uεx(x2, t)
− η′(uε(x1, t)) · uεx(x1, t)]dxdt
Suppose that uε → u, and let ε → 0:

∫_{t1}^{t2} ∫_{x1}^{x2} (η(u)t + ψ(u)x) dx dt ≤ 0   (6.3)

provided uεx is uniformly integrable, i.e.

| ∫_{t1}^{t2} ∫_{x1}^{x2} (η′(uε)·uεx)x dx dt | ≤ C.
This means that a solution u of ut + F(u)x = 0 has to satisfy
η(u)t + ψ(u)x ≤ 0
in a suitable weak sense.
Example: Burgers' equation. Take
η(u) = u², F(u) = u²/2.
Then ψ′(u) = η′(u)·F′(u) = 2u·u = 2u², so ψ(u) = (2/3)u³.
Possible choice: (η, ψ) = (u², (2/3)u³).
Suppose u has a jump across a smooth curve C.
[Figure: box [x1, x1 + ∆x] × [t1, t1 + ∆t] crossed by the jump curve C]
s = speed of the curve, s = ∆x/∆t.
The left-hand side of (6.3) over this box is

∫_{x1}^{x2} η(u)|_{t1}^{t2} dx + ∫_{t1}^{t2} ψ(u)|_{x1}^{x2} dt
≈ (x2 − x1)·(η(ul) − η(ur)) + (t2 − t1)·(ψ(ur) − ψ(ul))
= s·∆t·(ul² − ur²) + ∆t·(2/3)·(ur³ − ul³)
= . . . = −(1/6)·∆t·(ul − ur)³ ≤ 0
=⇒ the jump is admissible only if ul > ur.
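The elided algebra in the last computation can be checked symbolically; a sketch with sympy (the entropy pair (u², (2/3)u³) and the shock speed s = (ul + ur)/2 are taken from the text):

```python
import sympy as sp

ul, ur, dt = sp.symbols('u_l u_r Delta_t', positive=True)

# Entropy pair for Burgers' equation and the Rankine-Hugoniot shock speed
eta = lambda u: u**2
psi = lambda u: sp.Rational(2, 3) * u**3
s = (ul + ur) / 2

# Entropy production of the jump over a time interval of length Delta_t
expr = s * dt * (eta(ul) - eta(ur)) + dt * (psi(ur) - psi(ul))

# Should equal -(1/6) * Delta_t * (u_l - u_r)**3
assert sp.simplify(expr + sp.Rational(1, 6) * dt * (ul - ur)**3) == 0
print(sp.factor(expr))
```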
6.7 Conservation Laws in nD and Kruzkov’s Stability Result
Conservation laws in n dimensions:

ut + divx ~F(u) = 0,

where ~F : R → Rn is the vector-valued flux, ~F(u) = (F1(u), . . . , Fn(u)).
Definition 6.7.1: A pair (η, ~ψ) is called an entropy-entropy flux pair if ~ψ′(u) = ~F′(u)·η′(u), where
~ψ : R → Rn, ψ′i(u) = F′i(u)·η′(u),
η : R → R,
and if η is convex.
Take a viscous approximation:

uεt + divx ~F(uε) = ε·∆uε

uεt·η′(uε) + Σ_{i=1}^n (∂/∂xi Fi(uε))·η′(uε) = ε·∆uε·η′(uε)   (6.4)

(∂/∂xi Fi(uε))·η′(uε) = F′i(uε)·η′(uε)·(∂uε/∂xi) = ψ′i(uε)·(∂uε/∂xi) = ∂/∂xi ψi(uε)
(6.4) is equivalent to

η(uε)t + div(~ψ(uε)) = ε·div(η′(uε)·Duε) − ε·η′′(uε)·|Duε|² = ε·div(D(η(uε))) − ε·η′′(uε)·|Duε|².
Multiply this by a test function φ ≥ 0 with compact support:

∫_0^∞ ∫_{Rn} φ·[η(uε)t + div ~ψ(uε)] dx dt = ∫_0^∞ ∫_{Rn} φ·[ε·∆η(uε) − ε·η′′(uε)·|Duε|²] dx dt

∫_0^∞ ∫_{Rn} φ·[η(uε)t + div ~ψ(uε)] dx dt ≤ ∫_0^∞ ∫_{Rn} ε·φ·∆η(uε) dx dt
If uε → u as ε → 0, then

∫_0^∞ ∫_{Rn} φ·[η(u)t + div ~ψ(u)] dx dt ≤ 0.
Definition 6.7.2: A bounded function u is a weak entropy solution of
ut + div ~F(u) = 0,
u(x, 0) = u0(x),
if

∫_0^∞ ∫_{Rn} (η(u)·φt + ~ψ(u)·Dφ) dx dt + ∫_{Rn} φ(x, 0)·η(u0(x)) dx ≥ 0

for all test functions φ ≥ 0 with compact support in Rn × [0, ∞) and all entropy-entropy flux pairs (η, ~ψ).
Additional conservation laws:
η(u)t + divx ~ψ(u) ≤ 0
Analogue of the Rankine-Hugoniot jump condition:
[Figure: interface Γ with normal n separating the states (u−, v−) and (u+, v+)]
Similar arguments to the 1D case show

νn+1·[[η]] + [[~ψ]]·ν0 ≤ 0,

where ν = (ν0, νn+1) is the normal in space-time. Define n = ν0/|ν0|, V = −νn+1/|ν0|. Then

V·[[η]] ≥ [[~ψ]]·n,

where V is the normal velocity of the interface.
By approximation, we can extend this setting to Lipschitz continuous η, and in particular to Kruzkov's entropy η(u) = |u − k|, where k is a constant, with
~ψ(u) = sgn(u − k)·(~F(u) − ~F(k)).
Check the R.H. jump inequality s·[[η]] ≥ [[~ψ]] with Kruzkov's entropy in 1 dimension:

s·(|ul − k| − |ur − k|) ≥ sgn(ul − k)·(F(ul) − F(k)) − sgn(ur − k)·(F(ur) − F(k))

Suppose that ur < k < ul:

s·(ul − k + ur − k) ≥ F(ul) − F(k) + F(ur) − F(k)
⇐⇒ s·(ul + ur − 2k) ≥ F(ul) + F(ur) − 2F(k)
⇐⇒ F(k) ≥ (F(ul) + F(ur))/2 + (k − (ul + ur)/2)·s, with s = (F(ul) − F(ur))/(ul − ur).

Sign error: this should recover Oleinik's entropy criterion. Correction: with u+ = ur, u− = ul one recovers Oleinik's entropy criterion.
Theorem 6.7.3 (Kruzkov): Suppose that u1 and u2 are two weak entropy solutions of ut + div ~F(u) = 0 which satisfy u1, u2 ∈ C0([0, ∞); L1(Rn)) ∩ L∞((0, ∞) × Rn). Then

||u1(·, t2) − u2(·, t2)||_{L1(Rn)} ≤ ||u1(·, t1) − u2(·, t1)||_{L1(Rn)}

for all 0 ≤ t1 ≤ t2 < ∞.
Remark: This implies uniqueness if u1 and u2 have the same initial data.
Notation: L1(Rn) = space of all integrable functions with
||f||_{L1(Rn)} = ∫_{Rn} |f| dx;
L∞ = space of all bounded measurable functions.
The first assumption means that the map t ↦ Φ(t) = u(·, t) is continuous, i.e. for t0 fixed, ∀ε > 0 ∃δ > 0 such that ||Φ(t) − Φ(t0)||_{L1(Rn)} ≤ ε if |t − t0| < δ, i.e.
∫_{Rn} |u(x, t0) − u(x, t)| dx ≤ ε.
Proof: (formal)
Kruzkov entropy-entropy flux:
η(u) = |u − k|, ~ψ(u) = sgn(u − k)·(~F(u) − ~F(k)).
The entropy inequality for u1 reads

∫_{Rn} ∫_{R+} (|u1 − k|·φt + sgn(u1 − k)·(~F(u1) − ~F(k))·Dφ) dx dt ≥ 0

for all φ ≥ 0 with compact support in Rn × (0, ∞). Suppose you can choose k = u2(x, t) (it is not clear you can do this, since k is constant):

∫_{Rn} ∫_{R+} (|u1(x, t) − u2(x, t)|·φt + sgn(u1 − u2)·[~F(u1) − ~F(u2)](x, t)·Dxφ) dx dt ≥ 0
[Figure: truncated cone K ⊂ Rn × R+ between t = t1 (radius R) and t = t2 (radius R − M(t2 − t1)), with lateral normal ν ∝ (1/√(M² + 1))·(x/|x|, M)]

Define K ⊂ Rn × R+ as above and choose φ = χK, i.e.

φ(x, t) = 1 if (x, t) ∈ K, 0 otherwise.

(This will also have to be considered further, as this function isn't smooth.)
Expectation: (Dx, Dt)χK = −ν·Hn⌊∂K (Hn is the surface measure on ∂K).
One way to make this precise is to use integration by parts:

∫ ~ψ·Dφ = −∫ φ·div ~ψ
Suppose all this is correct. The boundary contributions are:

Top: −∫_{t=t2, |x| ≤ R−M(t2−t1)} |u1(x, t2) − u2(x, t2)| dx

Bottom: +∫_{t=t1, |x| ≤ R} |u1(x, t1) − u2(x, t1)| dx

Lateral: +∫_A [ −(M/√(M²+1))·|u1(x, t) − u2(x, t)| − (1/√(M²+1))·sgn(u1 − u2)·(~F(u1) − ~F(u2))·(x/|x|) ] dHn,

where A := {t1 < t < t2, |x| = R − M(t − t1)}.

3rd integral (up to the factor 1/√(M²+1)):

∫_A [ −M·|u1 − u2| − sgn(u1 − u2)·(~F(u1) − ~F(u2))·(x/|x|) ] dHn ≤ ∫_A [ −M·|u1 − u2| + |~F(u1) − ~F(u2)| ] dHn ≤ 0,

provided |~F(u1) − ~F(u2)| ≤ M·|u1 − u2|.
By assumption, |u1|, |u2| ≤ N. Take M = sup_{|u|≤N} |~F′(u)|. Then

∫_{|x| ≤ R−M(t2−t1)} |u1(x, t2) − u2(x, t2)| dx ≤ ∫_{|x| ≤ R} |u1(x, t1) − u2(x, t1)| dx.
Remark: The proof shows more:

∫_{|x| ≤ R} |u1(x, t2) − u2(x, t2)| dx ≤ ∫_{|x| ≤ R+M(t2−t1)} |u1(x, t1) − u2(x, t1)| dx.

This shows again finite speed of propagation.
Fixing the proof:
• φ = χK . Fix by taking a regularization of χK , i.e. with standard mollifiers.
• Choice k = u2(x, t)
"Doubling of the variables". Take k = u2(y, s) and integrate the entropy inequality for u1 in y and s:

0 ≤ ∫_{Rn×R+} ∫_{Rn×R+} [ |u1(x, t) − u2(y, s)|·φt + sgn(u1 − u2)·(~F(u1) − ~F(u2))·Dxφ ] dx dt dy ds

Entropy inequality for u2, written in y and s; take k = u1(x, t) and integrate in x and t:

0 ≤ ∫_{Rn×R+} ∫_{Rn×R+} [ |u1(x, t) − u2(y, s)|·φs + sgn(u1 − u2)·(~F(u1) − ~F(u2))·Dyφ ] dx dt dy ds

Add the two inequalities:

0 ≤ ∫_{Rn×R+} ∫_{Rn×R+} [ |u1 − u2|·(φt + φs) + sgn(u1 − u2)·(~F(u1) − ~F(u2))·(Dxφ + Dyφ) ] dx dt dy ds
Trick: Choose φε to enforce y = x as ε → 0. Take ρ ∈ C∞0(−1, 1) with ∫_R ρ = 1 and set

ρε(t) = (1/ε)·ρ(t/ε), ρε(x) = (1/εⁿ)·Π_{i=1}^n ρ(xi/ε).
Choose

φ(x, t, y, s) = ~ψ((x + y)/2, (t + s)/2)·ρε((x − y)/2)·ρε((t − s)/2),

with ~ψ smooth and of compact support in Rn × (0, ∞). In the variables x̄ = (x + y)/2, ȳ = (x − y)/2, t̄ = (t + s)/2, s̄ = (t − s)/2 one finds

φt + φs = ~ψt̄(x̄, t̄)·ρε(ȳ)·ρε(s̄),
Dxφ + Dyφ = Dx̄~ψ(x̄, t̄)·ρε(ȳ)·ρε(s̄).
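The rescaled mollifier keeps unit mass, which is the point of the trick; a quick numerical sanity check (not in the notes), with the standard bump ρ(t) = c·exp(1/(t² − 1)) on (−1, 1) and the constant c computed numerically:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoid rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bump(t):
    """Unnormalized standard mollifier, supported in (-1, 1)."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(1.0 / (t[inside]**2 - 1.0))
    return out

t = np.linspace(-1.0, 1.0, 20001)
c = 1.0 / trap(bump(t), t)             # normalize: rho = c * bump, int rho = 1

def rho_eps(s, eps):
    """Rescaled mollifier rho_eps(s) = (1/eps) * rho(s/eps)."""
    return (c / eps) * bump(s / eps)

masses = []
for eps in (1.0, 0.5, 0.25):
    s = np.linspace(-eps, eps, 20001)
    masses.append(trap(rho_eps(s, eps), s))
print(masses)   # each mass equals 1 up to rounding
```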
[Figure: mollifiers ρ1, ρ1/2, ρ1/4 for ρ ∈ C∞0(−1, 1), narrowing and growing as ε decreases]
With x̄ = (x + y)/2, ȳ = (x − y)/2, t̄ = (t + s)/2, s̄ = (t − s)/2, this is a weighted integral; as ε → 0 the integration concentrates on ȳ = 0, s̄ = 0, i.e. x = y and t = s. You get exactly the inequality we got formally with k = u2(x, t).
6.8 Exercises
6.1: Let f(x) be a piecewise constant function,

f(x) = 1 for x < −1; 1/2 for −1 < x < 1; 3/2 for 1 < x < 2; 1 for x > 2.

Construct the entropy solution of the IVP ut + uux = 0, u(x, 0) = f(x), for small values of t > 0 using shock waves and rarefaction waves. Sketch the characteristics of the solution. Why does the structure of the solution change for large times?
6.2: Motivation of Lax’s entropy condition via stability. Consider Burgers’ equation
ut + uux = 0
subject to the initial conditions
u0(x) = 0 for x > 0, 1 for x < 0;   u1(x) = 1 for x > 0, 0 for x < 0.

The physically relevant solution should be stable under small perturbations in the initial data. Which solution (shock wave or rarefaction wave) do you obtain in the limit as n → ∞ if you approximate u0 and u1 by

u0,n(x) = 0 for x < 0, nx for 0 ≤ x ≤ 1/n, 1 for x > 1/n;
u1,n(x) = 1 for x < 0, 1 − nx for 0 ≤ x ≤ 1/n, 0 for x > 1/n?
6.3: Let the initial data for Burgers’ equation ut + uux = 0 be given by
f(x) = ul for x < 0; ul − ((ul − ur)/L)·x for 0 ≤ x ≤ L; ur for x > L,
Gregory T. von Nessi
Versio
n::3
11
02
01
1
6.8 Exercises149
where 0 < ur < ul. Find the time t∗ when the profile of the solution becomes vertical. More generally, suppose that the initial profile is smooth and has a point x0 where f′(x0) < 0. Find the time t∗ at which the solution of Burgers' equation first breaks down (develops a vertical tangent).
6.4: [Profile of rarefaction waves] Suppose that the conservation law
ut + f (u)x = 0
has a solution of the form u(x, t) = v(x/t). Show that the profile v(s) is given by
v(s) = (f ′)−1(s).
6.5: Verify that the viscous approximation of Burgers’ equation
ut + uux = εuxx
has a traveling wave solution of the form u(x, t) = v(x − st) with

v(y) = uR + (1/2)·(uL − uR)·(1 − tanh((uL − uR)·y/(4ε))),

where s = (uL + uR)/2 is the shock speed given by the Rankine-Hugoniot condition. Sketch this solution. What happens as ε → 0?
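The traveling wave ansatz reduces the PDE to the ODE −s·v′ + v·v′ = ε·v″; the stated profile can be spot-checked symbolically (a sketch with sympy, not a substitute for working the exercise by hand):

```python
import sympy as sp

y, eps, uL, uR = sp.symbols('y epsilon u_L u_R', positive=True)

s = (uL + uR) / 2                      # Rankine-Hugoniot shock speed
v = uR + sp.Rational(1, 2) * (uL - uR) * (1 - sp.tanh((uL - uR) * y / (4 * eps)))

# Plugging u(x,t) = v(x - s t) into u_t + u u_x = eps u_xx gives this residual
residual = -s * sp.diff(v, y) + v * sp.diff(v, y) - eps * sp.diff(v, y, 2)
assert sp.simplify(residual) == 0
```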
6.6: Consider the Cauchy problem for Burgers’ equation,
ut + uux = 0
with the initial data

u(x, 0) = 1 for |x| > 1; |x| for |x| ≤ 1.
a) Sketch the characteristics in the (x, t) plane. Find a classical solution (continuous and
piecewise C1) and determine the time of breakdown (shock formation).
b) Find a weak solution for all t > 0 containing a shock curve. Note that the shock
does not move with constant speed. Therefore, find first the solution away from the
shock. Then use the Rankine-Hugoniot condition to find a differential equation for the
position of the shock given by (x = s(t), t) in the (x, t)-plane.
6.7: [Traffic flow] A simple model for traffic flow on a one lane highway without ramps leads to the following first order equation, which describes the conservation of mass:

ρt + vmax·(ρ·(1 − ρ/ρmax))x = 0, x ∈ R, t > 0.

In this model, the velocity of cars is related to the density by

v(x, t) = vmax·(1 − ρ/ρmax),

where we assume that the maximal velocity vmax is equal to one. Solve the Riemann problem for this conservation law with

ρ(x, 0) = ρL if x < 0; ρR if x > 0.
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
150Chapter 6: Conservation Laws
Under what conditions on ρL and ρR is the solution a shock wave and a rarefaction wave,
respectively? Sketch the characteristics. Does this reflect your daily driving experience? Find
the profile of the rarefaction wave.
6.8: [LeVeque] The Buckley-Leverett equations are a simple model for two-phase flow in a porous medium with nonconvex flux

f(u) = u² / (u² + (1/2)·(1 − u)²).
In secondary oil recovery, water is pumped into some wells to displace the oil remaining in the
underground rocks. Therefore u represents the saturation of water, namely the percentage
of water in the water-oil fluid, and varies between 0 and 1. Find the entropy solution to the
Riemann problem for the conservation law ut + f (u)x = 0 with initial states
u(x, 0) = 1 if x < 0; 0 if x > 0.
Hint: The line through the origin that is tangent to the graph of f on the interval [0, 1] has slope 1/(√3 − 1) and touches at u = 1/√3. If you are proficient with Mathematica or Matlab you could try to plot the profile of the rarefaction wave.
6.9: Let u0(x) = sgn(x) be the sign function in R and let a be a constant. Let u(x, t)
be the solution of the transport equation
ut + aux = 0,
and let uε(x, t) be the solution of the so-called viscous approximation
uεt + auεx − εuεxx = 0,
both subject to the initial condition u(·, 0) = uε(·, 0) = u0.
a) Derive representation formulas for u and uε!
Hint: For the viscous approximation, consider v ε(x, t) = uε(x + at, t).
b) Prove the error estimate

∫_{−∞}^{∞} |u(x, t) − uε(x, t)| dx ≤ C·√(εt), for t > 0.
7 Elliptic, Parabolic and Hyperbolic Equations
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
7.1 Classification of 2nd Order PDE in Two Variables 153
7.2 Maximum Principles for Elliptic Equations of 2nd Order 158
7.3 Maximum Principles for Parabolic Equations 172
7.4 Exercises 178
7.1 Classification of 2nd Order PDE in Two Variables
a(x, y) · uxx + 2b(x, y) · uxy + c(x, y) · uyy + d(x, y) · ux + e(x, y) · uy + g(x, y) · u = f (x, y)
General mth order PDE:

L(x, D)u = Σ_{|α|≤m} aα(x)·Dαu,

where u : Rn → R. Principal part:

Lp(x, D) = Σ_{|α|=m} aα(x)·Dα.
The symbol of the PDE is defined to be

L(x, iξ) = Σ_{|α|≤m} aα(x)·(iξ)^α,

ξ ∈ Rn; one replaces ∂/∂xj by iξj (one takes iξ because of the Fourier transform).
Examples: Laplacian L = ∆ = Σ_j ∂²/∂x²j:

L(x, iξ) = Σ_{j=1}^n (iξj)² = −|ξ|².
Heat Equation (writing t = xn+1):

Lu = ∂u/∂xn+1 − ∆u.

Symbol:

L(x, iξ) = iξn+1 − Σ_{j=1}^n (iξj)².
Wave equation:

L = ∂²/∂x²n+1 − ∆,
L(x, iξ) = (iξn+1)² − Σ_{j=1}^n (iξj)².
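The substitution ∂/∂xj → iξj can be carried out mechanically; a small sympy sketch (not in the notes) computing the three symbols above for n = 2 spatial variables, with ξ = (ξ1, ξ2) and τ the dual variable of t:

```python
import sympy as sp

xi1, xi2, tau = sp.symbols('xi1 xi2 tau', real=True)
I = sp.I

# Symbol: replace each d/dx_j by i*xi_j (and d/dt by i*tau)
laplace = (I * xi1)**2 + (I * xi2)**2
heat = I * tau - ((I * xi1)**2 + (I * xi2)**2)
wave = (I * tau)**2 - ((I * xi1)**2 + (I * xi2)**2)

assert sp.simplify(laplace + xi1**2 + xi2**2) == 0            # -|xi|^2
assert sp.simplify(heat - (I * tau + xi1**2 + xi2**2)) == 0
assert sp.simplify(wave - (-tau**2 + xi1**2 + xi2**2)) == 0
```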
Principal part of the symbol:

Lp(x, iξ) = Σ_{|α|=m} aα(x)·(iξ)^α.
Examples:
∆: Lp = −|ξ|²;  heat: Lp = ξ1² + · · · + ξn²;  wave: Lp = −ξ²n+1 + ξ1² + · · · + ξn².
2nd order equation in two variables:

Lp(x, D) = a(x, y)·∂²/∂x² + 2b(x, y)·∂²/∂x∂y + c(x, y)·∂²/∂y²

Lp(x, iξ) = −(a(x, y)·ξ1² + 2b(x, y)·ξ1ξ2 + c(x, y)·ξ2²) = (ξ1, ξ2)ᵀ·A·(ξ1, ξ2),

where A = ( −a −b ; −b −c ).
Definition 7.1.1: The PDE is
elliptic if detA > 0
parabolic if detA = 0
hyperbolic if detA < 0.
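Since det A = ac − b² (the overall minus sign in A does not change the 2×2 determinant), the classification reduces to the sign of one determinant; a small sketch with a hypothetical helper function of mine:

```python
def classify(a, b, c):
    """Classify a*u_xx + 2*b*u_xy + c*u_yy + ... at a point via det A = a*c - b^2."""
    det = a * c - b * b
    if det > 0:
        return "elliptic"
    if det == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1, 0, 1))    # Laplace's equation -> elliptic
print(classify(1, 0, 0))    # heat equation (no u_yy term) -> parabolic
print(classify(1, 0, -1))   # wave equation -> hyperbolic
```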
Typical situation: after diagonalizing, −A = diag(λ1, λ2), and the equation reads λ1·uxx + λ2·uyy + lower order terms.

elliptic: −A = (1 0; 0 1): uxx + uyy = ∆u;
parabolic: −A = (1 0; 0 0): uxx + lower order terms;
hyperbolic: −A = (1 0; 0 −1): uxx − uyy = wave.
Remarks: This definition even allows an ODE in the parabolic case (not good). Better definition: parabolic means ut + (elliptic operator applied to u) = f, e.g.
ut − ∆u = f.
Two reasons why this determinant is important:

(A) Given Cauchy data along a curve C = (x(s), y(s)), parametrized with respect to arc length, that is,

u(x(s), y(s)) = u0(s),
∂u/∂n(x(s), y(s)) = v0(s).

Question: Conditions on C that determine the second derivatives of a smooth solution.

Tangential derivative: Du·~T = ux·ẋ + uy·ẏ = w0.
With tangent ~T = (ẋ, ẏ) and normal ~n = (−ẏ, ẋ):
Normal derivative: Du·~n = −ux·ẏ + uy·ẋ = v0.
Differentiating these relations along the curve,

uxx·ẋ² + 2uxy·ẋẏ + uyy·ẏ² + ux·ẍ + uy·ÿ = ẇ0(s),
−uxx·ẋẏ + (ẋ² − ẏ²)·uxy + uyy·ẋẏ − ux·ÿ + uy·ẍ = v̇0(s),

and moving the known first-order terms to the right-hand side, together with the PDE itself:

uxx·ẋ² + 2uxy·ẋẏ + uyy·ẏ² = p(s),
−uxx·ẋẏ + (ẋ² − ẏ²)·uxy + uyy·ẋẏ = r(s),
a·uxx + 2b·uxy + c·uyy = q(s).
We can solve for the 2nd derivatives along the curve if

det ( ẋ²  2ẋẏ  ẏ² ; −ẋẏ  ẋ²−ẏ²  ẋẏ ; a  2b  c ) ≠ 0.

Expanding,

det = a·(ẏ⁴ + ẋ²ẏ²) − 2b·(ẋ³ẏ + ẋẏ³) + c·(ẋ⁴ + ẋ²ẏ²) ≠ 0.
Divide by (ẋẏ)²:

a·(ẏ²/ẋ² + 1) − 2b·(ẏ/ẋ + ẋ/ẏ) + c·(ẋ²/ẏ² + 1) ≠ 0
⇐⇒ a·((dy/dx)² + 1) − 2b·(dy/dx + dx/dy) + c·((dx/dy)² + 1) ≠ 0
⇐⇒ (1 + (dx/dy)²)·(a·(dy/dx)² − 2b·(dy/dx) + c) ≠ 0,

and the first factor never vanishes. So consider the quadratic

a·t² − 2b·t + c = 0, i.e. a·(t² − (2b/a)·t + c/a) = 0
⇐⇒ (t − b/a)² − b²/a² + ac/a² = 0
⇐⇒ t = b/a ± √(b² − ac)/a.
Real solutions exist if

det ( a b ; b c ) = ac − b² ≤ 0.

For an elliptic equation there is no such direction, so the 2nd derivatives are always determined.
Definition 7.1.2: A curve C is called a characteristic curve for L if

a·(dy/dx)² − 2b·(dy/dx) + c = 0.
(b) Curves of discontinuity. Let a, . . . , f, g be continuous, and let u be a solution of Lu = f with u, ux, uy continuous and uxx, uxy, uyy continuous except on a curve C = (x(s), y(s)), across which they jump. What condition must C satisfy? Denote the one-sided limits by superscripts l and r. Since ux is continuous,

ulx(x(s), y(s)) = urx(x(s), y(s)).
Differentiate along the curve:

ulxx·ẋ + ulxy·ẏ = urxx·ẋ + urxy·ẏ, i.e. [[uxx]]·ẋ + [[uxy]]·ẏ = 0.   (7.1)

Analogously, from uly(x(s), y(s)) = ury(x(s), y(s)),

[[uxy]]·ẋ + [[uyy]]·ẏ = 0.   (7.2)

The equation holds outside the curve, and all 1st order derivatives and coefficients are continuous, so along the curve

a·[[uxx]] + 2b·[[uxy]] + c·[[uyy]] = 0.   (7.3)
=⇒ linear system:

( ẋ ẏ 0 ; 0 ẋ ẏ ; a 2b c )·([[uxx]], [[uxy]], [[uyy]])ᵀ = 0.
A non-trivial solution exists only if

det ( ẋ ẏ 0 ; 0 ẋ ẏ ; a 2b c ) = a·ẏ² − 2b·ẋẏ + c·ẋ² = 0
⇐⇒ a·(dy/dx)² − 2b·(dy/dx) + c = 0.
A jump discontinuity is only possible if C is characteristic:

Elliptic: no characteristic curves;
Parabolic: one characteristic;
Hyperbolic: two characteristics,

dy/dx = b/a ± √(b² − ac)/a.

Laplace: a = c = 1, b = 0: dy/dx = ±√−1, no real solution.
Parabolic: a = 1, b = c = 0: dy/dx = 0, so y = const. is characteristic.
Hyperbolic: a = 1, c = −1, b = 0: dy/dx = ±1, so y = ±x are characteristic.
7.2 Maximum Principles for Elliptic Equations of 2nd Order
Definition 7.2.1: L is an elliptic operator in divergence form if

L(x, D)u = −Σ_{i,j=1}^n ∂i(aij(x)·∂ju) + Σ_{i=1}^n bi(x)·∂iu + c(x)·u(x)

and if the matrix (aij) is positive definite, i.e.,

Σ_{i,j=1}^n aij(x)·ξiξj ≥ λ(x)·|ξ|²

for all x ∈ Ω and all ξ ∈ Rn, with λ(x) > 0.
Remarks: Usually elliptic means uniformly elliptic, i.e.
(1) λ(x) ≥ λ0 > 0 ∀x ∈ Ω.
(2) Divergence form because L(x,D)u = −div (ADu) + ~b ·Du + cu where A = (ai j)
Definition 7.2.2: L is an elliptic operator in non-divergence form if

L(x, D)u = −Σ_{i,j=1}^n aij(x)·∂i∂ju + Σ_{i=1}^n bi(x)·∂iu + c(x)·u(x)

and

Σ_{i,j=1}^n aij·ξiξj ≥ λ(x)·|ξ|²

for all x ∈ Ω, ξ ∈ Rn.
Remark: If ai j ∈ C1, then an operator in non-divergence form can be written as an operator
in divergence form. We may also assume that ai j = aj i .
Definition 7.2.3: A function u ∈ C2 is called a subsolution (supersolution) if L(x, D)u ≤ 0 (L(x, D)u ≥ 0).

Remark: For n = 1 and Lu = −uxx: u is a subsolution iff −uxx ≤ 0 ⇐⇒ uxx ≥ 0 ⇐⇒ u convex, which implies u ≤ v whenever vxx = 0 and v = u at the endpoints of an interval.
Remark: We cannot expect maximum principles to hold for general elliptic equations. Obstruction: the existence of eigenvalues and eigenfunctions.

Example: −u′′ = λu with λ = 1 and u(x) = sin(x) on [0, 2π]:
sin x solves −u′′ = u, so there is no maximum principle for Lu = −uxx − u.
Big difference between c ≥ 0 and c < 0.
[Figure: sin x on [0, 2π], attaining an interior maximum]
7.2.1 Weak Maximum Principles
7.2.1.1 WMP for Purely 2nd Order Elliptic Operators
Again we take Ω ⊂ Rn. We now consider the following differential operator:
Lu := Σ_{i,j=1}^n aij·Diju + Σ_{i=1}^n bi·Diu + c·u,

where aij, bi, c : Ω → R. Now, we state the following.
Definition 7.2.4: The operator L is said to be elliptic (degenerate elliptic) if A := [aij] > 0 (≥ 0) in Ω.
Remarks:
i.) In the above definition, A > 0 means the minimum eigenvalue of the matrix A is > 0. More explicitly, this means that

Σ_{i,j=1}^n aij·ξiξj > 0, ∀ξ ∈ Rn \ {0}.
ii.) The last definition indicates that parabolic PDE (such as the Heat Equation) are really
degenerate elliptic equations.
For convenience, we will be using the Einstein summation convention for repeated indices. This means that in any expression where an index appears more than once, a summation over that index from 1 to n is applied. With that in mind, we can write our general operator as

Lu := aij·Diju + bi·Diu + c·u.
Now, we move onto the weak maximum principle for our special case operator:
Lu = ai jDi ju.
Theorem 7.2.5 (Weak Maximum Principle): Consider u ∈ C2(Ω) ∩ C0(Ω̄) and f : Ω → R. If Lu ≥ f, then

u ≤ max∂Ω u + (1/2)·(supΩ |f| / TrA)·Diam(Ω)²,   (7.4)

where TrA is the trace of the coefficient matrix A (i.e. TrA = Σ_{i=1}^n aii).
Note: If f ≡ 0, the above reduces to the result of the first weak maximum principle we
proved. Namely, u ≤ max∂Ω u.
Proof: Without loss of generality we may translate Ω so that it contains the origin. Again,
we consider an auxiliary function:
v = u + k |x |2,
where k is an arbitrary constant to be determined later. Since the Hessian Matrix D2u is
symmetric, we may choose a coordinate system such that D2u is diagonal. In this coordinate
system, we calculate
Lv = Lu + 2kai jSi j = Lu + 2k · T rA,
where Sii = 1 and Sij = 0 when i ≠ j. Now, suppose that v attains an interior maximum at y ∈ Ω. From calculus, we then know that D2v(y) ≤ 0, which implies that aij·Dijv(y) ≤ 0, as A is positive definite. Now, if we choose

k > (1/2)·supΩ |f| / TrA,

our above calculation indicates that Lv > 0; a contradiction. Thus,

v ≤ max∂Ω v,

which implies the result, as Ω contains the origin (so that |x|² ≤ Diam(Ω)²).
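A concrete instance of (7.4) in one dimension (Lu = u″, so TrA = 1, on Ω = (0, 1)): for u(x) = sin(πx) we have Lu = f with f = −π²·sin(πx), the boundary maximum is 0, and the estimate bounds u by π²/2. A numerical sketch (the example is mine, not from the notes):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
u = np.sin(np.pi * x)                   # u'' = f with f = -pi^2 sin(pi x)
f = -np.pi**2 * np.sin(np.pi * x)

boundary_max = max(u[0], u[-1])         # max over the boundary {0, 1}
tr_A = 1.0                              # Lu = a^{11} u'' with a^{11} = 1
diam = 1.0
bound = boundary_max + 0.5 * (np.max(np.abs(f)) / tr_A) * diam**2

print(np.max(u), "<=", bound)           # 1.0 <= pi^2/2 ~ 4.93
```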
Example 3: Now we will apply the weak maximum principle to the second most famous
elliptic PDE, the Minimal Surface Equation:
(1 + |Du|2)∆u −DiuDjuDi ju = 0.
This is a quasilinear PDE as only derivatives of the 1st order are multiplied against the 2nd
order ones (a fully nonlinear PDE would have products of second order derivatives). We will
now show that this is indeed an elliptic PDE. Recalling our special case Lu = aij·Diju, we can rewrite the minimal surface equation in this form with

aij = (1 + |Du|²)·Sij − Diu·Dju,

with S being as it was in the previous proof. Now, we calculate

aij·ξiξj = (1 + |Du|²)·Sij·ξiξj − Diu·Dju·ξiξj
= (1 + |Du|²)·|ξ|² − Diu·Dju·ξiξj
= (1 + |Du|²)·|ξ|² − (Diu·ξi)²
≥ (1 + |Du|²)·|ξ|² − |Du|²·|ξ|²
= |ξ|².
To get the third equality in the above, we simply relabeled repeated indices: since one sums over a repeated index, it is a dummy index whose relabeling does not affect the calculation.

From this calculation, we conclude this equation is indeed elliptic. Now, upon taking f = 0 and replacing u by −u, we can apply the weak maximum principle to this equation.
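The bound aij·ξiξj ≥ |ξ|² says the smallest eigenvalue of A = (1 + |Du|²)·I − Du ⊗ Du is at least 1 (in fact it equals 1, with eigenvector Du); a numerical spot check with random gradients, my sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.normal(size=3)                        # a sample gradient Du
    A = (1.0 + p @ p) * np.eye(3) - np.outer(p, p)
    lam_min = np.linalg.eigvalsh(A).min()
    assert lam_min >= 1.0 - 1e-10                 # ellipticity constant 1
print("minimum eigenvalue is 1 up to rounding")
```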
7.2.1.2 WMP for General Elliptic Equations

Now, we will go over the weak maximum principle for operators containing derivatives of all orders ≤ 2.
Theorem 7.2.6 (Weak Maximum Principle): Consider u ∈ C2(Ω) ∩ C0(Ω̄); aij, bi, c : Ω → R; c ≤ 0; and Ω bounded. If u satisfies the elliptic inequality (i.e. [aij] = A > 0)

Lu = aij·Diju + bi·Diu + c·u ≥ f,   (7.5)

then

u ≤ max∂Ω u+ + C(n, Diam(Ω), supΩ |b|/λ)·supΩ |f|/λ,   (7.6)

where

λ = min_{|ξ|=1} aij·ξiξj and u+ = max{u, 0}.
Note: If ai j is not constant, then λ will almost always be non-constant on Ω as well.
Consequences:

• If f = 0, then u ≤ max∂Ω u+.

• If f = c = 0, then u ≤ max∂Ω u.

• If Lu ≤ f, one gains a lower bound

u ≥ min∂Ω u− − C·supΩ |f|/λ,

where u− = min{u, 0}.

• If we have the PDE Lu = f, then one has both an upper and a lower bound:

|u| ≤ max∂Ω |u| + C·supΩ |f|/λ.
• Uniqueness of the Dirichlet Problem. Recall that the problem is as follows:

PDE: Lu = f in Ω,
u = φ on ∂Ω.

So if one has u, v ∈ C2(Ω) ∩ C0(Ω̄) with Lu = Lv in Ω and u = v on ∂Ω, then u ≡ v in Ω. The proof is the simple application of the weak maximum principle to w = u − v, which again solves the PDE (with f = 0) as L is linear.
Proof of Weak Max. Principle: As in previous proofs, we analyze a particular auxiliary function
to prove the result.
Aside: v = |x |2 will not work in this situation as Lv = 2 · T rA+ 2bixi + c |x |2; the bixi may
be a large negative number for large values of |x |.
[Figure: domain Ω with boundary point at the origin O, direction ξ, breadth d; Ω− = {x ∈ Ω | u < 0}, Ω+ = {x ∈ Ω | u > 0}]
Let us try

v = e^{α(x,ξ)},

where ξ is some vector to be chosen and (x, ξ) denotes the inner product. Also, we will consider Ω+ = {x ∈ Ω | u > 0} instead of Ω for the rest of the proof. Clearly, this does not affect the result, as we are seeking to bound u by max∂Ω u+. With that, we have on Ω+ (using c ≤ 0 and u > 0)

L0u := aij·Diju + bi·Diu ≥ −c·u + f ≥ f.

Now, we wish to specify the vector ξ. For the following, please refer to the above figure. We first choose our coordinates so that the origin corresponds to a boundary point of Ω such that there exists a hyperplane through that point with Ω lying on one side of it; ξ is simply taken to be the unit vector normal to this hyperplane. Next, we calculate

L0v = (α²·aij·ξiξj + α·bi·ξi)·e^{α(x,ξ)} ≥ α·λ·e^{α(x,ξ)},
provided we choose α ≥ supΩ |b|/λ + 1.

Aside: The calculation for the determination of α is as follows:

α²·aij·ξiξj + α·bi·ξi ≥ α²·λ + α·bi·ξi.

So α²·λ + α·bi·ξi ≥ α·λ holds when

α·λ ≥ λ − bi·ξi, i.e. α ≥ supΩ (1 − bi·ξi/λ).

So, clearly we have the desired inequality if we choose α ≥ b̄ + 1, where b̄ := supΩ |b|/λ.
Given our choice of ξ, it is clear that (x, ξ) ≥ 0 for all x ∈ Ω. Thus, one sees that L0v ≥ αλ. So, one can ascertain that

L0(u+ + kv) ≥ f + αλk > 0 in Ω+,

provided k is chosen so that

k > (1/α)·supΩ |f|/λ.

Now, the proof proceeds as previous ones. Suppose w := u+ + kv has a positive maximum in Ω+ at a point y. In this situation we have

[Dijw(y)] ≤ 0 =⇒ aij·Dijw(y) ≤ 0, and Dw(y) = 0 =⇒ bi·Diw(y) = 0,
which implies Lw ≤ 0; a contradiction. So, this implies the second inequality of the following:
u + (1/α)·supΩ |f|/λ ≤ w ≤ max∂Ω+ w ≤ max∂Ω+ u+ + (e^{αd}/α)·supΩ |f|/λ ≤ max∂Ω u+ + (e^{αd}/α)·supΩ |f|/λ,

again with α = supΩ |b|/λ + 1, and where d is the breadth of Ω in the direction of ξ (so that 1 ≤ v ≤ e^{αd} on Ω). From the above, we finally get

u ≤ max∂Ω u+ + ((e^{αd} − 1)/α)·supΩ |f|/λ.
Remark: Given the above, one needs |b|λ and |f |λ bounded for the weak maximum principle to
give a non-trivial result. Then one may apply the weak maximum principle to get uniqueness.
Theorem 7.2.7 (weak maximum principle, c = 0): Ω open, bounded, u ∈ C2(Ω) ∩ C(Ω̄), L uniformly elliptic with c = 0, and Lu ≤ 0. Then

maxΩ̄ u = max∂Ω u.
Analogously, if Lu ≥ 0, then

minΩ̄ u = min∂Ω u.
Proof:
(1) Suppose Lu < 0 and u has a maximum at x0 ∈ Ω. Then Du(x0) = 0 and

D2u(x0) = (∂²u/∂xi∂xj (x0))_{i,j}

is negative semidefinite.
Fact from linear algebra: if A, B are symmetric and positive semidefinite, then A : B = tr(AB) ≥ 0.

Indeed, take Q orthogonal with QAQᵀ = Λ = diag(λ1, . . . , λn), λi ≥ 0. Then

tr(AB) = tr(QABQᵀ) = tr(QAQᵀ·QBQᵀ) = tr(Λ·B̄),

where B̄ = QBQᵀ is symmetric and positive semidefinite, so B̄ii ≥ 0 and

tr(Λ·B̄) = Σ_{i=1}^n λi·B̄ii ≥ 0.

Moreover,

tr(AB) = Σ_i (AB)ii = Σ_{i,j} Aij·Bji = Σ_{i,j} Aij·Bij = A : B.
Here

Lu = −Σ_{i,j=1}^n aij·∂iju + Σ_{i=1}^n bi·∂iu,

so

Lu(x0) = −(aij) : D2u(x0) + 0,

with (aij) symmetric and positive definite and −D2u(x0) symmetric and positive semidefinite. Conclusion:

0 > Lu(x0) ≥ 0 (the first inequality by assumption),

a contradiction; u cannot have a maximum at x0.
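The linear-algebra fact used in step (1), tr(AB) ≥ 0 for symmetric positive semidefinite A and B, can be spot-checked numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(200):
    # Random symmetric positive semidefinite matrices A = M M^T, B = N N^T
    M = rng.normal(size=(4, 4))
    N = rng.normal(size=(4, 4))
    A, B = M @ M.T, N @ N.T
    assert np.trace(A @ B) >= -1e-10   # A : B = tr(AB) >= 0
print("tr(AB) >= 0 for all samples")
```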
(2) General case Lu ≤ 0. Consider uε(x) = u(x) + ε·e^{rx1}, with ε > 0 and r > 0 chosen later. Then

Luε = Lu + ε·L(e^{rx1}) ≤ ε·L(e^{rx1}) = ε·(−a11·r² + b1·r)·e^{rx1}.

Here a11 ≥ λ0 > 0 (since Σ aij·ξiξj ≥ λ0·|ξ|², take ξ = e1 = (1, 0, . . . , 0)), and b = b(x) is by assumption in C(Ω̄), so |b(x)| ≤ B = maxΩ̄ |b|. Hence

Luε ≤ ε·(−a11·r² + B·r)·e^{rx1} = ε·r·(−a11·r + B)·e^{rx1} < 0

if r > 0 is big enough. By the previous argument,

maxΩ̄ uε = max∂Ω uε.

As ε → 0, uε → u, and in the limit

maxΩ̄ u = max∂Ω u.

(In fact, r is an absolute constant, r = r(λ0, B).)
Corollary 7.2.8 (Uniqueness): There exists at most one solution u ∈ C2(Ω) ∩ C(Ω̄) of the BVP Lu = f in Ω, u = u0 on ∂Ω.

Proof: By contradiction, using the maximum and minimum principles.
Theorem 7.2.9 (weak maximum principle, c ≥ 0): Suppose u ∈ C2(Ω) ∩ C(Ω̄), and that c ≥ 0.

i.) If Lu ≤ 0, then maxΩ̄ u ≤ max∂Ω u+.

ii.) If Lu ≥ 0, then minΩ̄ u ≥ −max∂Ω u−,

where u+ = max(u, 0) and u− = −min(u, 0), that is, u = u+ − u−.
Proof: Proof for ii.) follows from i.) with −u.
i.) Let Ω+ = {x ∈ Ω | u(x) > 0} and define Ku = Lu − cu. If Ω+ = ∅, then the assertion is obviously true.

On Ω+, Ku = Lu − cu ≤ 0 − cu ≤ 0, i.e. on Ω+, u is a subsolution for K, and we may use the weak maximum principle for K (which has no zeroth-order term).

If Ω+ ≠ ∅, the weak maximum principle for K on Ω+ gives

maxΩ̄+ u ≤ max∂Ω+ u.   (7.7)

[Figure: Ω with subregion Ω+]

On ∂Ω+ ∩ Ω we have u = 0, so

max∂Ω+ u = max∂Ω u+,

and the assertion follows from (7.7).
Recall: Weak Maximum Principles for Lu ≤ 0:

c = 0: maxΩ̄ u = max∂Ω u;
c ≥ 0: maxΩ̄ u ≤ max∂Ω u+.
Now, we consider strong maximum principles.
7.2.2 Strong Maximum Principles
Theorem 7.2.10 (Hopf's maximum principle; Boundary point lemma): Ω bounded, connected, with smooth boundary; u ∈ C2(Ω) ∩ C(Ω̄) a subsolution, Lu ≤ 0 in Ω; and suppose that x0 ∈ ∂Ω is such that u(x0) > u(x) for all x ∈ Ω. Moreover, assume that Ω satisfies an interior ball condition at x0, i.e. there exists a ball B(y, R) ⊂ Ω such that B̄(y, R) ∩ ∂Ω = {x0}.

i.) If c = 0, then the strict inequality

∂u/∂ν(x0) > 0

follows from the assumptions.

ii.) If c ≥ 0, then the same assertion holds provided u(x0) ≥ 0.
Proof:
Idea: Construct a barrier v with v > 0 in B and v = 0 on ∂B such that

w = u(x) − u(x0) + ε·v(x) ≤ 0 in B, i.e. u(x) − u(x0) ≤ −ε·v(x).

Construction of v: take

v(x) = e^{−αr²} − e^{−αR²}, where r = |x − y|,

so that v > 0 in B and v = 0 on ∂B.
∂v/∂xi = −2α·(xi − yi)·e^{−αr²}

Lv = −Σ aij·∂ijv + Σ bi·∂iv + c·v
= −Σ 4α²·(xi − yi)(xj − yj)·e^{−αr²}·aij + Σ aii·2α·e^{−αr²} − Σ 2α·bi·(xi − yi)·e^{−αr²} + c·(e^{−αr²} − e^{−αR²})

By ellipticity, Σ aij·ξiξj ≥ λ0·|ξ|², i.e. −Σ aij·ξiξj ≤ −λ0·|ξ|², so

Lv ≤ −λ0·4α²·|x − y|²·e^{−αr²} + 2α·tr(aij)·e^{−αr²} + 2α·|~b|·|x − y|·e^{−αr²} + c·(e^{−αr²} − e^{−αR²}).
Using e^{−αr²} − e^{−αR²} = e^{−αr²}·(1 − e^{−α(R²−r²)}) with 0 ≤ 1 − e^{−α(R²−r²)} ≤ 1,

Lv ≤ e^{−αr²}·(−λ0·4α²·|x − y|² + 2α·tr(aij) + 2α·|b|·|x − y| + |c|).
If |x − y| is bounded from below, then Lv ≤ 0 if α is big enough. So set

A = B(y, R) ∩ B(x0, R/2);

[Figure: ball B(y, R) touching ∂Ω at x0, intersected with B(x0, R/2) to form A]

then |x − y| ≥ R/2 in A and Lv ≤ 0 for α big enough.
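For the model case L = −∆ (b = 0, c = 0) the barrier computation is explicit: Lv = −∆v = −(4α²r² − 2nα)·e^{−αr²}, which is ≤ 0 once α ≥ n/(2r²), so on A (where r ≥ R/2) any α ≥ 2n/R² works. A sympy sketch of this special case (the reduction to −∆ is mine):

```python
import sympy as sp

alpha, R = sp.symbols('alpha R', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)   # center y = 0, n = 3
r2 = x1**2 + x2**2 + x3**2
v = sp.exp(-alpha * r2) - sp.exp(-alpha * R**2)

# For L = -Laplacian (b = 0, c = 0):
Lv = -sum(sp.diff(v, xi, 2) for xi in (x1, x2, x3))
expected = -(4 * alpha**2 * r2 - 2 * 3 * alpha) * sp.exp(-alpha * r2)
assert sp.simplify(Lv - expected) == 0
```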
Define w(x) = u(x) − u(x0) + ε·v(x). Then (using Lu ≤ 0, L(u(x0)) = c·u(x0) and Lv ≤ 0)

Lw = Lu − c·u(x0) + ε·Lv ≤ −c·u(x0) ≤ 0,

so w is a subsolution on A. On ∂A:
on ∂A ∩ ∂B(y, R): v = 0 and u ≤ u(x0), so w ≤ 0;
on ∂A ∩ B(y, R): u(x0) > u(x) in Ω, so u(x) − u(x0) ≤ −δ for some δ > 0 by compactness, hence w ≤ 0 if ε is small enough.

Thus Lw ≤ 0 in A and w ≤ 0 on ∂A. Apply the weak maximum principle to w:

maxĀ w ≤ max∂A w+ = 0,

so w(x) ≤ 0 for all x ∈ A. Since w(x0) = 0 and u(x) − u(x0) ≤ −ε·v(x),
=⇒ ∂u/∂ν(x0) ≥ −ε·∂v/∂ν(x0).

With ∂v/∂xi = −2α·(xi − yi)·e^{−αr²},

∂u/∂ν(x0) ≥ −ε·(−2α)·e^{−αR²}·(x0 − y)·(x0 − y)/|x0 − y| = 2ε·α·e^{−αR²}·|x0 − y|²/|x0 − y| = 2ε·α·R·e^{−αR²} > 0.
Theorem 7.2.11 (Strong maximum principle, c = 0): Ω open and connected (not necessarily bounded), c = 0.

i.) If Lu ≤ 0 and u attains its maximum at an interior point, then u ≡ constant.

ii.) If Lu ≥ 0 and u attains its minimum at an interior point, then u ≡ constant.
Proof:
Let M = maxΩ̄ u and suppose u ≢ M. Then

V = {x ∈ Ω | u(x) < M} ≠ ∅ (by the contradiction assumption).

[Figure: V ⊂ Ω with a ball inside V touching ∂V at a maximum point x̄, with normal ν]

Choose x0 ∈ V such that dist(x0, ∂V) < dist(x0, ∂Ω); then B(x0, |x̄ − x0|) ⊂ V for a nearest point x̄ ∈ ∂V, and we may assume B̄ ∩ ∂V = {x̄}. This satisfies the assumptions of the boundary point lemma, so

∂u/∂ν(x̄) > 0.

Contradiction, since ∂u/∂ν(x̄) = Du(x̄)·ν = 0: u(x̄) = M, i.e. x̄ is an interior maximum point of u.
Theorem 7.2.12 (Strong maximum principle, c ≥ 0): Ω open and connected (not necessarily bounded), u ∈ C2(Ω), c ≥ 0.

i.) If Lu ≤ 0 in Ω and u attains a non-negative maximum, then u ≡ constant in Ω.

ii.) If Lu ≥ 0 and u attains a non-positive minimum, then u ≡ constant in Ω.

Proof: Hopf's Lemma for a non-negative maximum point at the boundary.

Example: c = 1, −u′′ + u = 0 has solutions u = sinh, cosh.
7.3 Maximum Principles for Parabolic Equations
Definition 7.3.1: The operator L(x, t, D) = ∂t + A, with

Au = −Σ_{i,j=1}^n aij(x)·∂iju + Σ_{i=1}^n bi·∂iu + c·u,

is said to be parabolic if A is (uniformly) elliptic:

Σ_{i,j=1}^n aij·ξiξj ≥ λ0·|ξ|², ∀ξ ∈ Rn, λ0 > 0.
Remark: Solutions of elliptic equations are stationary states of parabolic equations, so we make the same assumptions concerning c.
Ω open, bounded; ΩT = Ω × (0, T]; parabolic boundary

ΓT = Ω̄ × {t = 0} ∪ ∂Ω × [0, T).

[Figure: cylinder ΩT over Ω with parabolic boundary ΓT]
7.3.1 Weak Maximum Principles
Theorem 7.3.2 (weak maximum principle, c = 0): Let u ∈ C²₁(Ω_T) ∩ C(Ω̄_T), c = 0 in Ω_T.
i.) If Lu = u_t + Au ≤ 0 in Ω_T, then
max_{Ω̄_T} u = max_{Γ_T} u.
ii.) If Lu ≥ 0, then
min_{Ω̄_T} u = min_{Γ_T} u.
Proof:
i.) Suppose first Lu < 0 and that the maximum is attained at (x0, t0) ∈ Ω_T with t0 < T. Then u_t(x0, t0) = 0 and Au(x0, t0) ≥ 0 as in the elliptic case, so
0 > Lu = u_t + Au ≥ 0,
a contradiction. If the maximum is attained at (x0, T), then u_t(x0, T) ≥ 0, and again 0 > Lu = u_t + Au ≥ 0, a contradiction.
If only Lu ≤ 0, take u_ε = u − εt. Then
Lu_ε = (∂_t + A)(u − εt) = Lu − ε ≤ −ε < 0,
so u_ε satisfies Lu_ε < 0, and therefore
max_{Ω̄_T} u_ε = max_{Γ_T} u_ε
for all ε > 0; the assertion follows as ε → 0.
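The proof above can be checked numerically. The sketch below (my own illustration, not from the text) treats the model case L = ∂_t − ∂²/∂x² on Ω = (0, 1): an explicit finite-difference scheme whose discrete solution, like the continuous one, attains its maximum on the parabolic boundary Γ_T. The grid sizes and data are arbitrary choices.

```python
import math

nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx                       # explicit-scheme stability: dt <= dx^2 / 2
x = [i * dx for i in range(nx)]
u = [math.sin(math.pi * xi) + 0.5 * xi for xi in x]   # data on the parabolic boundary

boundary_max = max(u)                    # max over Gamma_T (initial slice; the lateral
interior_max = max(u)                    # values 0 and 0.5 lie below this)
for _ in range(nt):
    new = u[:]                           # endpoints (Dirichlet data) stay fixed
    for i in range(1, nx - 1):
        new[i] = u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new
    interior_max = max(interior_max, max(u))

# The max over the whole cylinder equals the max over the parabolic boundary.
print(interior_max <= boundary_max + 1e-12)
```

The stability restriction dt ≤ dx²/2 makes each update a convex combination of neighbouring values, which is precisely the discrete analogue of the maximum principle.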
Theorem 7.3.3 (weak maximum principle, c ≥ 0): Let u ∈ C²₁(Ω_T) ∩ C(Ω̄_T), c ≥ 0 in Ω_T. Write u = u⁺ − u⁻, where u⁺ = max(u, 0) and u⁻ = −min(u, 0).
i.) If Lu = u_t + Au ≤ 0, then
max_{Ω̄_T} u ≤ max_{Γ_T} u⁺.
ii.) If Lu ≥ 0, then
min_{Ω̄_T} u ≥ −max_{Γ_T} u⁻.
Proof: Suppose first Lu < 0 and that u has a positive maximum at (x0, t0) ∈ Ω_T. Then
0 > Lu = (∂_t + A)u ≥ c(x0, t0)u(x0, t0) > 0,
a contradiction. The case t0 = T is handled similarly.
General case: take u_ε = u − εt, so that
Lu_ε = (∂_t + A)(u − εt) ≤ 0 − ε − ε c(x, t) t ≤ −ε < 0,
using c ≥ 0 and t ≥ 0. The previous step applies: if u has a positive maximum in Ω̄_T, then u_ε has a positive maximum in Ω_T for ε small enough, and the assertion follows as ε → 0.
7.3.2 Strong Maximum Principles
For strong maximum principles see Evans (Harnack inequality for parabolic equations) and Protter and Weinberger, Maximum Principles in Differential Equations (a form of Hopf's Lemma).
Theorem 7.3.4: Let Ω be bounded and open with smooth boundary, and let u ∈ C²₁(Ω_T) ∩ C(Ω̄_T) with Lu = u_t + Au ≤ 0. Suppose that either
i.) c = 0, or
ii.) c ≥ 0 and max_{Ω̄_T} u = M ≥ 0.
Then the strong maximum principle holds.
Idea of Proof: Take n = 1, Ω = (α, β) and Lu = u_t − a u_xx + b u_x + cu with a = a(x, t) ≥ λ₀ > 0.
Step 1: Let B_ε = B_ε(ξ, τ) be an open ball in Ω_T with u < M in B_ε and u(x0, t0) = M at some (x0, t0) ∈ ∂B_ε. Then x0 = ξ.
The proof (by contradiction) uses the weak maximum principle for w = u − M + ηv, where
v(x, t) = e^{−Ar²} − e^{−Aε²}, A > 0, r² = (x − ξ)² + (t − τ)².
Compute
Lv = e^{−Ar²}(−a · 4A²(x − ξ)² + ⋯).
If the maximum were attained at a point (x0, t0) with x0 ≠ ξ, choose δ small enough that |x − ξ| > c(δ) > 0 for all x ∈ B_δ(x0); then for A large, w is a subsolution on B_δ with w < 0 on ∂B_δ and w(x0, t0) = 0, which contradicts the weak maximum principle.
Step 2: Suppose u(x0, t0) < M with t0 ∈ (0, T). Then u(x, t0) < M for all x ∈ (α, β).
Idea: Suppose there exists x1 ∈ (α, β) with u(x1, t0) = M and u(x, t0) < M for x ∈ (x0, x1). Let d(x) denote the distance from the point (x, t0) to the set {u = M}; then d(x1) = 0 and d(x0) > 0. Moreover,
d(x + h) ≤ √(d²(x) + h²) and d(x) ≤ √(d²(x + h) + h²).
Suppose d is differentiable. The inequalities above can be rewritten as
$$\sqrt{d^2(x) - h^2} \le d(x + h) \le \sqrt{d^2(x) + h^2},$$
i.e.
$$\frac{\sqrt{d^2(x) - h^2} - d(x)}{h} \le \frac{d(x + h) - d(x)}{h} \le \frac{\sqrt{d^2(x) + h^2} - d(x)}{h}.$$
As h → 0, both bounds tend to 0, so d′(x) = 0 ⟹ d = constant. Contradiction, since d(x0) > 0 and d(x1) = 0.
Step 3: Suppose u(x, t) < M for α < x < β, 0 < t < t1, with t1 ≤ T. Then u(x, t1) < M for x ∈ (α, β).
Proof: Apply the weak maximum principle to w(x, t) = u − M + ηv, where
v(x, t) = exp(−(x − x1)² − A(t − t1)) − 1.
If u(x1, t1) = M for some x1, the comparison with w forces u_t(x1, t1) > 0, while u is a subsolution (Lu ≤ 0). But at (x1, t1),
$$Lu = \underbrace{u_t}_{>0} \underbrace{-\,au_{xx}}_{\ge 0} + \underbrace{bu_x}_{=0} + \underbrace{cu}_{=c(x_1,t_1)M \ge 0} > 0,$$
a contradiction.
Conclusion: Suppose u(x0, t0) = M with (x0, t0) ∈ Ω_T. We show that if u(x0, t0) = M = max, then u ≡ M on Ω_{t0}; we cannot conclude that u ≡ M on all of Ω_T.
Consider the line L(x0) = {(x0, t) : 0 ≤ t < t0}, and suppose there is a t1 ≤ t0 such that u(x0, t) < M for t < t1 and u(x0, t1) = M.
⟹ u(x, t) < M for all (x, t) in (α, β) × (0, t1)
⟹ by Step 3, u(x0, t1) < M. Contradiction.
Modifying the function v, the same argument works in n dimensions.
7.4 Exercises
7.1: [Qualifying exam 08/90] Consider the hyperbolic equation
Lu = u_yy − e^{2x} u_xx = 0 in ℝ².
a) Find the characteristic curves through the point (0, 1/2).
b) Suppose that u is a smooth solution of the initial value problem
u_yy − e^{2x} u_xx = 0 in ℝ²,
u(x, 0) = f(x) for x ∈ ℝ,
u_y(x, 0) = g(x) for x ∈ ℝ.
Show that the characteristic curves bound the domain of dependence for u at the point (0, 1/2). To do this, prove that u(0, 1/2) = 0 provided f and g are zero on the intersection of the domain of dependence with the x-axis. This can be done by defining the energy
$$e(y) = \frac{1}{2}\int_{g(y)}^{h(y)}\left(u_x^2 + e^{-2x}u_y^2\right)dx,$$
where (g(y), h(y)) is the intersection of the domain of dependence with the parallel to the x-axis through the point (0, y). Then show that e′(y) ≤ 0. (See also the proof for the domain of dependence for the wave equation with energy methods.)
7.2: [Evans 7.5 #6] Suppose that g is smooth and that u is a smooth solution of
u_t − Δu + cu = 0 in Ω × (0, ∞),
u = 0 on ∂Ω × [0, ∞),
u = g on Ω × {t = 0},
and that the function c satisfies c ≥ γ > 0. Prove the decay rate
|u(x, t)| ≤ Ce^{−γt} for (x, t) ∈ Ω × (0, ∞).
Hint: The solution of an ODE can also be viewed as a solution of a PDE!
7.3: [Evans 7.5 #7] Suppose that g is smooth and that u is a smooth solution of
u_t − Δu + cu = 0 in Ω × (0, ∞),
u = 0 on ∂Ω × [0, ∞),
u = g on Ω × {t = 0}.
Moreover, assume that c is bounded, but not necessarily nonnegative. Show that u ≥ 0.
Hint: What PDE does v(x, t) = e^{λt}u(x, t) solve?
7.4: [Qualifying exam 08/01] Let Ω be a bounded domain in Rn with a smooth bound-
ary. Use
a) a sharp version of the Maximum principle
b) and an energy method
to show that the only smooth solutions of the boundary-value problem
∆u = 0 in Ω,
∂nu = 0 on ∂Ω
(where ∂n is the derivative in the outward normal direction) have the form u = constant.
7.5: [Qualifying exam 01/02] Let B be the unit ball in Rn and let u be a smooth func-
tion satisfying
u_t − Δu + u^{1/2} = 0, 0 ≤ u ≤ M on B × (0, ∞),
u_r = 0 on ∂B × (0, ∞).
Here M is a positive number and ur is the radial derivative on ∂B. Prove that there is a
number T depending only on M such that u = 0 on B × (T,∞).
Hint: Let v be the solution of the initial value problem
dv/dt + v^{1/2} = 0, v(0) = M,
and consider the function w = v − u.
Part II
Functional Analytic Methods in PDE
8 Facts from Functional Analysis and Measure Theory
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
8.1 Introduction 182
8.2 Banach Spaces and Linear Operators 183
8.3 Fredholm Alternative 187
8.4 Spectral Theorem for Compact Operators 191
8.5 Dual Spaces and Adjoint Operators 195
8.6 Hilbert Spaces 196
8.7 Weak Convergence and Compactness; the Banach-Alaoglu Theorem 201
8.8 Key Words from Measure Theory 204
8.9 Exercises 209
8.1 Introduction
This chapter represents what I believe to be the most pedagogically challenging material in this book. In order to break into the modern theory of PDE, a certain amount of functional analysis and measure theory must be understood, and that is what the material in this chapter is meant to convey. Even though I've made every effort to keep the following grounded in PDE via remarks and examples, it is understandable that the reader might regard some parts of the chapter as unmotivated. Unfortunately, I believe this to be inevitable, as there is so much background material that needs to be absorbed before one can have a decent understanding of modern methods in PDE. In the next chapter we will start using the tools learned here to get some actual results in the context of linear elliptic and parabolic equations.
In regard to organization, I have tried to order the material in this chapter as sequentially as possible, i.e. so that each section can be understood based on the material of the previous sections. In addition, important theorems and their consequences are presented as soon as possible. I decided to focus on these two principles in the hope that the ordering of material will directly indicate the generality in which theorems and ideas can be applied. For example, the validity of a given theorem in a Banach space as opposed to a Hilbert space will hopefully be made clear by the order in which the material is presented. Lastly, for the sake of completeness, many proofs have been included in this chapter which can (and should) be skipped on a first reading of this book, as many of them dive into rather pedantic points of real and functional analysis.
8.2 Banach Spaces and Linear Operators
Philosophically, one of the major goals of modern PDE is to ascertain properties of solutions when equations cannot be solved analytically. These properties precipitate out of classifying solutions by seeing whether or not they belong to certain abstract spaces of functions. These abstractions of vector spaces are called Banach spaces (defined below), and they have many properties and characteristics that make them particularly useful for the study of PDE.
To begin, take X to be a topological vector space (TVS) over ℝ (or ℂ). Recall that a TVS is a linear space; this means that X satisfies the typical vector space axioms pertaining to scalars in ℝ (or ℂ) and carries a topology under which vector addition and scalar multiplication are continuous. If one is uncertain of this concept, refer to [Rud91] for more details.
Definition 8.2.1: A norm is any mapping X → ℝ, denoted ||x|| for x ∈ X, having the following properties:
i.) ||x|| ≥ 0, ∀x ∈ X, with ||x|| = 0 if and only if x = 0;
ii.) ||αx|| = |α| · ||x||, ∀α ∈ ℝ, x ∈ X;
iii.) ||x + y|| ≤ ||x|| + ||y||, ∀x, y ∈ X.
Remark: iii.) is usually referred to as the triangle inequality.
Definition 8.2.2: A TVS, X, coupled with a given norm defined on X, namely (X, || · ||), is
a normed linear space (NLS).
Definition 8.2.3: Given an NLS (X, || · ||), we define the following
• A metric is a mapping ρ : X × X → ℝ defined by ρ(x, y) = ||x − y||.
• An open ball, denoted B(x, r), is the subset of X given by B(x, r) = {y ∈ X : ρ(x, y) < r}.
• If U ⊂ X is such that ∀x ∈ U, ∃r > 0 with B(x, r) ⊂ U, then U is open in X.
Definition 8.2.4: A NLS X is said to be complete, if every Cauchy sequence in X has a limit
in X. Moreover,
• a complete NLS is called a Banach Space (BS);
• A BS is separable if it contains a countable dense subset.
One can refer to either [Roy88] or [Rud91], if they are having a hard time understanding
any of the above definitions. To solidify these definitions, let us consider a few examples.
Example 4: i.) A classic example of a BS is C⁰([−1, 1]) = {f : [−1, 1] → ℝ : f continuous} with the norm ||f||_{C⁰([−1,1])} := max_{x∈[−1,1]} |f(x)|. (Showing completeness of this space is a good exercise in real analysis. See [Roy88] if you need help.) Of course, C⁰ under this norm is complete over any compact subset of ℝⁿ.
ii.) A non-example of a BS is X = {f : [−1, 1] → ℝ : f polynomial}. Indeed, X is not complete: by the Weierstrass approximation theorem, every continuous function on [−1, 1] is the uniform limit of a sequence of polynomials, and such a sequence is Cauchy in the norm above even when its limit is not a polynomial. One may try to argue against this using the discrete map (||f|| = 1 if f ≢ 0 and ||f|| = 0 otherwise), but the discrete map is not a valid norm, as it violates Definition 8.2.1.ii.
iii.) Another classic example of a BS from real analysis is the space ℓ^p = {a = (a₁, a₂, a₃, ...) : Σ_{i=1}^∞ |a_i|^p < ∞}, 1 ≤ p < ∞, with the norm
$$\|a\|_{\ell^p} = \left(\sum_{i=1}^\infty |a_i|^p\right)^{1/p}.$$
Again, showing that this space is complete is another good exercise in real analysis.
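As a quick sanity check on the definition (a sketch of my own, with finitely many terms standing in for an ℓ^p sequence), one can verify numerically that the outer p-th root in the norm is exactly what makes homogeneity and the Minkowski inequality hold:

```python
def lp_norm(a, p):
    # l^p norm of a finite sequence; the outer p-th root is essential
    return sum(abs(ai) ** p for ai in a) ** (1.0 / p)

a = [1.0, -2.0, 0.5]
b = [0.25, 1.0, -1.0]
p = 3

# homogeneity (Definition 8.2.1.ii): ||alpha a|| = |alpha| ||a||
assert abs(lp_norm([2 * ai for ai in a], p) - 2 * lp_norm(a, p)) < 1e-12
# triangle inequality (Minkowski)
assert lp_norm([ai + bi for ai, bi in zip(a, b)], p) <= lp_norm(a, p) + lp_norm(b, p) + 1e-12
print("norm axioms verified on a sample")
```

Dropping the 1/p power breaks homogeneity: scaling a by 2 would then scale the "norm" by 2^p.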
Moving on, we now start to consider certain types of mapping on our newly defined
spaces.
Definition 8.2.5: Given that X and Y are both NLS, T : X → Y is called a contraction if ∃θ < 1 such that
||Tx − Ty||_Y ≤ θ||x − y||_X, ∀x, y ∈ X.
A special case of the above definition is when X and Y are the same NLS. If we go further
and consider T : X → X, where X is a BS, then we have the following extremely powerful
theorem.
Theorem 8.2.6 (Banach's fixed point theorem): If X is a BS and T a contraction on X, then T has a unique fixed point in X; i.e. ∃! x ∈ X such that Tx = x. Moreover, for any y ∈ X, lim_{i→∞} T^i(y) = x.
Proof: Consider a given point x0 ∈ X and define the sequence x_i = T^i(x0). Then, with δ := ||x0 − Tx0||_X = ||x0 − x1||_X, we have
$$\|x_0 - x_k\|_X \le \sum_{i=1}^k \|x_{i-1} - x_i\|_X \le \sum_{i=1}^\infty \theta^{i-1}\|x_0 - x_1\|_X = \delta\sum_{i=1}^\infty \theta^{i-1} = \frac{\delta}{1-\theta}.$$
Also, for m ≥ n,
$$\|x_n - x_m\|_X \le \theta\|x_{n-1} - x_{m-1}\|_X \le \theta^2\|x_{n-2} - x_{m-2}\|_X \le \cdots \le \theta^n\|x_0 - x_{m-n}\|_X \le \delta\cdot\frac{\theta^n}{1-\theta},$$
which tends to 0 as n → ∞, since θ < 1. Thus, {x_i} is Cauchy. Put lim_{i→∞} x_i =: x ∈ X; then we see that Tx = lim_{i→∞} Tx_i = lim_{i→∞} x_{i+1} = x.
To finish the proof, we need to show uniqueness. Assume that w is another fixed point of T; then
||x − w||_X = ||Tx − Tw||_X ≤ θ||x − w||_X.
Since θ < 1, this implies that ||x − w||_X = 0 and hence x = w.
An immediate application of the above theorem is the proof of the Picard–Lindelöf theorem for ODE, which asserts the existence of solutions of first-order ODE under a Lipschitz condition on the inhomogeneity. See [TP85] for the theorem and proof.
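Banach's fixed point theorem is also a practical algorithm. A minimal illustration (my own example, not from the text): T(x) = cos x is a contraction on [0, 1], since |T′(x)| = |sin x| ≤ sin 1 < 1 there, so iterating T from any starting point converges to the unique fixed point.

```python
import math

def iterate(T, y, n):
    # apply T n times starting from y: the sequence T^i(y) of the theorem
    for _ in range(n):
        y = T(y)
    return y

x_star = iterate(math.cos, 0.0, 200)
print(abs(x_star - math.cos(x_star)) < 1e-12)             # a fixed point: Tx = x
print(abs(iterate(math.cos, 1.0, 200) - x_star) < 1e-12)  # limit independent of start
```

The error contracts by a factor θ = sin 1 ≈ 0.84 per step, so 200 iterations reduce it far below machine precision.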
Continuing our exposition on properties of BS operators, we make the following definition.
Definition 8.2.7: Take X and Y as NLS. T : X → Y is said to be bounded if
$$\|T\| = \sup_{x\in X,\,x\neq 0}\frac{\|Tx\|_Y}{\|x\|_X} < \infty. \qquad (8.1)$$
From (8.1), we readily verify that the space of all bounded linear mappings from X to Y, denoted L(X, Y), forms a vector space. Also, concerning the boundedness of BS operators, we have the following theorem from functional analysis.
Theorem 8.2.8: Take X and Y to be BS and T : X → Y linear. Then T is bounded if and only if it is continuous; moreover, if X is finite dimensional, every linear T is automatically continuous.
Proof : See [Rud91].
We now are in a position to state another very powerful idea that is useful for proving
existence of solutions for linear PDE.
Theorem 8.2.9 (Method of Continuity): Let X be a BS and Y a NLS, and let L0 and L1 be bounded linear operators mapping X → Y such that there exists a C > 0 satisfying the a priori estimate
$$\|x\|_X \le C\|L_t x\|_Y \quad \forall t \in [0, 1], \qquad (8.2)$$
where L_t := (1 − t)L0 + tL1. Then L1 maps X onto Y ⟺ L0 maps X onto Y.
Proof: Suppose that there exists an s ∈ [0, 1] such that L_s is onto. (8.2) indicates that L_s is also one-to-one; hence L_s^{−1} exists with ||L_s^{−1}|| ≤ C, where C is the same constant as in (8.2). Now fix an arbitrary y ∈ Y and t ∈ [0, 1]. We seek to prove that there exists x ∈ X such that L_t(x) = y, i.e. that L_t is onto Y. It is clear that L_t(x) = y if and only if
$$L_s x = L_t x + L_s x - L_t x = y + \big((1-s)L_0 + sL_1 - [(1-t)L_0 + tL_1]\big)x = y + (t-s)(L_0 - L_1)x.$$
Upon rewriting this equality, we see
$$x = L_s^{-1}y + (t-s)L_s^{-1}(L_0 - L_1)x =: Tx. \qquad (8.3)$$
So, now we claim that ∃x ∈ X satisfying (8.3). By Theorem 8.2.6, it suffices to show that T is a contraction mapping. Taking x1, x2 ∈ X, we have
$$\|Tx_1 - Tx_2\|_X = \|(t-s)L_s^{-1}(L_0 - L_1)(x_1 - x_2)\|_X \le |t-s|\cdot\underbrace{\|L_s^{-1}\|}_{\le C}\cdot\underbrace{\|L_0 - L_1\|}_{\le 2C}\cdot\|x_1 - x_2\|_X.$$
Thus, if we consider t such that
|t − s| < 1/(2C²),
we get that T is a contraction, and hence L_t is onto. Now, we can cover [0, 1] with open intervals of length less than 1/(2C²) to conclude that our original assumption (the existence of an s ∈ [0, 1] where L_s is onto) leads to L_t being onto ∀t ∈ [0, 1].
The method of continuity essentially falls under the general category of existence theorems. While not particularly good for explicit calculations, this theorem is very handy for proving, well, existence. The following is a nice exercise that demonstrates this.
Exercise: Prove the existence of a solution of the equation −(φu′)′ + u = f.
Hint: Consider the mappings
L0(u) = −u″ and L1(u) = −(φ · u′)′ + u,
together with the equations L0u = f and L1u = f, and note that L0 is onto (just integrate the equation for L0).
8.3 Fredholm Alternative
In this section, one of the most important tools in the study of PDE will be presented, namely the Fredholm Alternative. To begin the exposition, let us recall a fact from linear algebra.
Fact: If L : ℝⁿ → ℝⁿ is linear, the mapping L is onto and one-to-one (and hence invertible) if and only if the kernel is trivial, i.e. Ker(L) = {x : Lx = 0} = {0}.
In infinite dimensions this is no longer true in general, unless the operator is a compact perturbation of the identity map. The Fredholm Alternative can be viewed as a BS analogue of the above fact; but before going on to that theorem, let us define what a compact operator is.
Definition 8.3.1: Consider X and Y NLS with a bounded linear map T : X → Y . T is said
to be compact if T maps bounded sets in X into relatively compact sets in Y .
Sequentially speaking, the above statement is equivalent to saying that T is compact if
it maps bounded sequences in X into sequences in Y that contain converging subsequences.
Not only is the next lemma (due to Riesz) used in the proof of the Fredholm Alternative,
but it is also a very important property of NLS in its own right.
Lemma 8.3.2 (Almost orthogonal element): Let X be a NLS and Y a closed subspace of X with Y ≠ X. Then ∀θ < 1, ∃x_θ ∈ X satisfying ||x_θ|| = 1 such that dist(x_θ, Y) ≥ θ.
Proof: First, fix an arbitrary x ∈ X \ Y. Since Y is closed,
dist(x, Y) = inf_{y∈Y} ||x − y|| =: d > 0.
Thus, ∃y_θ ∈ Y such that
||x − y_θ|| ≤ d/θ.
Now, we define
x_θ = (x − y_θ)/||x − y_θ||;
it is clear that ||x_θ|| = 1. Moreover, for any y ∈ Y,
$$\|x_\theta - y\| = \frac{\big\|x - \big(y_\theta + \|y_\theta - x\|y\big)\big\|}{\|y_\theta - x\|} \ge \frac{d}{\|y_\theta - x\|} \ge \theta,$$
where the second inequality uses the fact that y_θ + ||y_θ − x||y ∈ Y, Y being a subspace of X.
Corollary 8.3.3: If X is an infinite dimensional NLS, then B1(0) is not precompact in X.
Proof: The statement of the corollary is equivalent to saying that there exist bounded sequences in B1(0) that have no converging subsequences. To see this, we argue directly by making the following construction. For a given infinite dimensional NLS X, pick an arbitrary element x0 with ||x0|| = 1. Since Span{x0} is a proper closed subspace of X, we know by Lemma 8.3.2 that there exists x1 ∈ X with ||x1|| = 1 such that dist(x1, Span{x0}) ≥ 1/2. By the same argument, we pick x2 ∈ X with ||x2|| = 1 such that dist(x2, Span{x0, x1}) ≥ 1/2. Repeating iteratively, we construct a bounded sequence {x_i} which clearly has no converging subsequence in X (the distance between any two of its elements is at least 1/2).
The above corollary indicates that compactness will be a big consideration in PDE analysis: many of the arguments to come are based on limiting procedures, to which the above corollary presents a major obstacle. Thus, compactness of operators (and of function space imbeddings; more later) plays a major role in PDE theory, as compactness "doesn't come for free" just because we work in a BS.
Now, we are finally in a position to present the objective of this section.
Theorem 8.3.4 (Fredholm Alternative): If X is a NLS and T : X → X is bounded, linear and compact, then
i.) either the homogeneous equation x − Tx = 0 has a non-trivial solution, i.e. Ker(I − T) is non-trivial, or
ii.) for each y ∈ X, the equation x − Tx = y has a unique solution x ∈ X. Moreover, (I − T)^{−1} is bounded.
Proof¹: [GT01] For clarity, the proof will be presented in four steps.
Step 1: Claim: Defining S := I − T, there exists a constant C such that
dist(x, Ker(S)) ≤ C||S(x)|| ∀x ∈ X. (8.4)
Proof of Claim: We argue by contradiction. Suppose the claim is false; then there exists a sequence {x_n} ⊂ X satisfying ||S(x_n)|| = 1 and d_n := dist(x_n, Ker(S)) → ∞. Upon extracting a subsequence, we can ensure that d_n ≤ d_{n+1}. Choosing a sequence {y_n} ⊂ Ker(S) such that d_n ≤ ||x_n − y_n|| ≤ 2d_n, we define
z_n := (x_n − y_n)/||x_n − y_n||,
so that ||z_n|| = 1. Moreover, since S(y_n) = 0,
$$\|S(z_n)\| = \frac{\|S(x_n) - S(y_n)\|}{\|x_n - y_n\|} = \frac{\|S(x_n)\|}{\|x_n - y_n\|} \le \frac{1}{d_n}.$$
¹Can be omitted on a first reading.
Thus, S(z_n) → 0 as n → ∞. Now, since T is compact, by passing to a subsequence if necessary we may assume that the sequence {Tz_n} converges to an element w0 ∈ X. Since z_n = (S + T)z_n, we then also have
lim_{n→∞} z_n = lim_{n→∞} (S + T)z_n = 0 + w0 = w0.
From this, we see that T(w0) = w0, i.e. z_n → w0 ∈ Ker(S). However, this leads to a contradiction, as
$$\mathrm{dist}(z_n, \mathrm{Ker}(S)) = \inf_{y\in \mathrm{Ker}(S)}\|z_n - y\| = \|x_n - y_n\|^{-1}\inf_{y\in \mathrm{Ker}(S)}\big\|x_n - y_n - \|x_n - y_n\|y\big\| = \|x_n - y_n\|^{-1}\,\mathrm{dist}(x_n, \mathrm{Ker}(S)) \ge \frac{1}{2}.$$
Step 2: We now wish to show that the range of S, R(S) := S(X), is closed in X. This amounts to showing that if {x_n} ⊂ X is a sequence with S(x_n) → y ∈ X, then y = S(x̃) for some x̃ ∈ X.
Consider the sequence {d_n} defined by d_n := dist(x_n, Ker(S)). By (8.4), {d_n} is bounded. Choosing y_n ∈ Ker(S) such that d_n ≤ ||x_n − y_n|| ≤ 2d_n, we see that the sequence defined by w_n = x_n − y_n is also bounded. Thus, by the compactness of T, T(w_n) converges to some z0 ∈ X, upon passing to a subsequence if necessary. We also have S(w_n) = S(x_n) − S(y_n) = S(x_n) → y by assumption. These last two observations, combined with the fact that w_n = (S + T)w_n, show that w_n → y + z0. Finally, we have that
lim_{n→∞} S(w_n) = lim_{n→∞} S(x_n − y_n) ⟹ S(y + z0) = y,
where passing the limit through S is justified by the continuity of S (see Theorem 8.2.8). Since y + z0 ∈ X, we conclude that R(S) is indeed closed.
Step 3: We now show that if Ker(S) = {0}, then R(S) = X; i.e. if case i.) of the theorem does not hold, then case ii.) is true.
To start, define the spaces S(X)^j := S^j(X). It is easy to see that
S(X) ⊂ X ⟹ S²(X) ⊂ S(X) ⟹ ⋯ ⟹ Sⁿ(X) ⊂ S^{n−1}(X).
Combined with the conclusion of Step 2, this shows that the S(X)^j constitute a nested sequence of closed, non-expanding subspaces of X. Now, we make the following
Claim: There exists K such that S(X)^i = S(X)^j for all i, j > K.
Proof of Claim: Suppose the claim is false. Then Lemma 8.3.2 lets us construct a sequence with x_n ∈ S(X)^n, ||x_n|| = 1 and dist(x_n, S(X)^{n+1}) ≥ 1/2. So, let us take n > m and consider
Tx_m − Tx_n = x_m − (x_n + S(x_m) − S(x_n)) = x_m − x̃,
where x̃ := x_n + S(x_m) − S(x_n) ∈ S(X)^{m+1}, since x_n ∈ S(X)^n ⊂ S(X)^{m+1} and S(x_m) ∈ S(X)^{m+1}. From this equation, we see that
||Tx_m − Tx_n|| ≥ dist(x_m, S(X)^{m+1}) ≥ 1/2 (8.5)
by construction of our sequence. (8.5) contradicts the compactness of T; our claim is proved.
As of yet, we still have not used the supposition that Ker(S) = {0}. Utilizing our claim, take k > K so that S(X)^k = S(X)^{k+1}. For arbitrary y ∈ X, we have S^k(y) ∈ S(X)^k = S(X)^{k+1}; thus ∃x ∈ X with S^k(y) = S^{k+1}(x). This leads to
S^k(S(x) − y) = 0;
but Ker(S) = {0} implies Ker(S^k) = {0}, so S(x) = y. Since y ∈ X was arbitrary, we get the result: R(S) = X.
Step 4: We lastly prove that if R(S) = X, then Ker(S) = {0}.
Proof of Step 4: This time we define a non-decreasing sequence of closed subspaces Ker(S)^j by setting Ker(S)^j := S^{−j}(0). This is indeed non-decreasing, as
{0} ⊂ Ker(S) = S^{−1}(0) ⟹ S^{−1}(0) ⊂ S^{−2}(0) ⟹ ⋯ ⟹ S^{−k}(0) ⊂ S^{−(k+1)}(0).
Also, the Ker(S)^j are closed due to the continuity of S. By an argument based on Lemma 8.3.2, analogous to that used in Step 3, we obtain that ∃l such that Ker(S)^j = Ker(S)^l for all j ≥ l. Now, since R(S) = X implies R(S^l) = X, for any arbitrary element y ∈ Ker(S)^l there exists x ∈ X such that y = S^l x. Consequently S^{2l}(x) = S^l(y) = 0, which implies x ∈ Ker(S)^{2l} = Ker(S)^l, whence
y = S^l x = 0.
The result follows, since the preceding equation and the non-decreasing nature of the Ker(S)^l show that Ker(S) ⊂ Ker(S)^l = {0}.
The boundedness of the operator S^{−1} = (I − T)^{−1} in case ii.) follows from (8.4) with Ker(S) = {0}. This completes the entire proof, whew!
To conclude this section, we will consider a few examples of compact and non-compact
operators.
Example 5: If T : X → Y is continuous and Y finite dimensional, then T is compact. It is
left to the reader to verify this, as it is a straight-forward calculation in real analysis.
Example 6: If X = C¹([0, 1]) equipped with the C⁰ norm, Y = C⁰([0, 1]), and T = d/dx, then T is not a compact operator. Indeed, a bound of the form
||Tu||_{C⁰([0,1])} ≤ C||u||_{C⁰([0,1])}
cannot hold for all u ∈ X: the functions u_n(x) = sin(nx) satisfy ||u_n||_{C⁰} ≤ 1 while ||Tu_n||_{C⁰} = n. Thus, T is not even bounded; and hence, it is not compact. This indicates why taking derivatives is a "bad" operation.
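The unboundedness of d/dx can also be seen numerically. The sketch below (my own illustration, not from the text) evaluates the ratio sup|u_n′| / sup|u_n| on a sample grid for u_n(x) = sin(nx) and watches it grow without bound:

```python
import math

def sup_on_grid(f, pts=10001):
    # approximate the sup norm on [0, 1] by sampling on a uniform grid
    return max(abs(f(i / (pts - 1))) for i in range(pts))

for n in (1, 10, 100):
    ratio = sup_on_grid(lambda x, n=n: n * math.cos(n * x)) / sup_on_grid(lambda x, n=n: math.sin(n * x))
    print(n, ratio)   # grows roughly like n: no constant C can bound ||Tu|| / ||u||
```

Any candidate constant C in the bound above is beaten by taking n > C, which is exactly what "unbounded operator" means.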
Example 7: Take X = C⁰([0, 1]), Y = C⁰([0, 1]), and Tu = ∫₀ˣ u(s) ds. Now, take {u_j} to be a bounded sequence in C⁰([0, 1]), say bounded by a constant C. Then v_j = Tu_j is also bounded in C⁰; in fact, it is easy to see that ||v_j||_{C⁰} ≤ C. Now, we use the following to show T compact:
Arzelà–Ascoli Theorem: If {u_j} is uniformly bounded and equicontinuous, then it contains a converging subsequence in C⁰. (We will come back to this theorem later.)
So, we look at
|v_j(x) − v_j(y)| = |∫ₓʸ u_j(s) ds| ≤ |x − y| · sup|u_j| ≤ C|x − y|.
Thus, the v_j are equicontinuous. By applying Arzelà–Ascoli, we find that {v_j} does indeed contain a converging subsequence in C⁰([0, 1]). We conclude that T is compact.
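The equicontinuity estimate above has a direct discrete analogue. The following sketch (my own discretization, not from the text) applies a Riemann-sum version of T to a sup-bounded family and checks the uniform Lipschitz bound |v_j(x) − v_j(y)| ≤ C|x − y| with C = 1:

```python
import math

N = 1000
h = 1.0 / N

def T(u):
    # left-endpoint Riemann sums of the running integral of u on [0, 1]
    out, acc = [0.0], 0.0
    for i in range(N):
        acc += u[i] * h
        out.append(acc)
    return out

xs = [i * h for i in range(N + 1)]
family = [[math.sin(j * x) for x in xs] for j in (1, 5, 25, 125)]  # ||u_j|| <= 1

for u in family:
    v = T(u)
    lip = max(abs(v[i + 1] - v[i]) / h for i in range(N))
    assert lip <= 1.0 + 1e-12   # uniform Lipschitz constant C = sup_j ||u_j|| = 1
print("uniform equicontinuity bound verified")
```

The increments of v_j are averages of u_j, so the images flatten out no matter how wildly the inputs oscillate; this smoothing is what makes integration compact while differentiation is not even bounded.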
8.4 Spectral Theorem for Compact Operators
In this section we will start investigating the spectra of compact operators. First, for completeness, we recall the following elementary definitions.
Definition 8.4.1: Take X to be a NLS and T ∈ L(X, X). The resolvent set, ρ(T), is the set of all λ ∈ ℂ such that (λI − T) is one-to-one and onto. In addition, R_λ = (λI − T)^{−1} is the resolvent operator associated with T.
From the Fredholm Alternative in the previous section, we see that R_λ is a bounded linear map for λ ∈ ρ(T).
Definition 8.4.2: Take X to be a NLS and T ∈ L(X, X). λ ∈ ℂ is called an eigenvalue of T if there exists x_λ ≠ 0 such that T(x_λ) = λx_λ; the x_λ in this equation is called an eigenvector associated with λ. The set of eigenvalues of T is called the spectrum, denoted σ(T). Also, we define
• N_λ = {x : Tx = λx}, the eigenspace of λ;
• m_λ = dim(N_λ), the multiplicity of λ.
Warning: It is not necessarily true that [ρ(T )]C = σ(T ).
The next lemma is a simple result from real analysis that will be important in the proof
of the main theorem of this section.
Lemma 8.4.3: If a set U ⊂ ℝⁿ (ℂⁿ) is bounded and has only a finite number of accumulation points, then U is at most countable.
Figure 8.1: Layout for Lemma 8.4.3; note that U is only approximated and x0 is not necessarily contained in U.
Proof¹: Without loss of generality, we consider U ⊂ ℂ with only one accumulation point x0 (the argument doesn't change for the other cases).
Since U is bounded, there exists 0 < R < ∞ such that U ⊂ B_R(0), and there exists r > 0 such that B_r(x0) ⊂ B_R(0); define Q_r := B̄_R(0) \ B_r(x0). To proceed, we make the following
Claim: Q_r contains only a finite number of points of U.
Suppose not; then Q_r ∩ U is at least countably infinite. Since Q_r is closed and bounded, it is compact (Heine–Borel). Thus, we cover Q_r with a finite number of balls of radius 1. At least one of these balls, call it B*₁, is such that B*₁ ∩ U is at least countably infinite. Consequently, we define V₁ := B̄*₁ ∩ Q_r. In a similar way, we cover V₁ with a finite number of balls of radius 1/2, where B*_{1/2} is one of these balls with B*_{1/2} ∩ U at least countably infinite. In analogy with V₁, we define V₂ := B̄*_{1/2} ∩ V₁. We thus iteratively construct a sequence of closed sets V_n := B̄*_{1/n} ∩ V_{n−1}. By construction, V_n ⊂ V_m for m < n with diam(V_n) ≤ 2/n. By the compactness of the V_n, there exists
x ∈ ⋂_{n=1}^∞ V_n;
but x is then an accumulation point of U, since every V_n ∩ U is at least countably infinite and diam(V_n) → 0. Since x ∈ Q_r, we have x ≠ x0, contradicting the assumption that x0 is the only accumulation point. This proves the claim.
Now, it is clear that
$$B_R(0) = \{x_0\} \cup \left[\bigcup_{n=1}^\infty Q_{r/n}\right].$$
Thus,
$$U \subset \{x_0\} \cup \bigcup_{n=1}^\infty (U \cap Q_{r/n}), \qquad (8.6)$$
¹Can be omitted on a first reading.
since U ⊂ B_R(0). Since each U ∩ Q_{r/n} is finite by the claim, the RHS of (8.6) is countable, and so is U.
We are now in a position to state the main result of the section.
Theorem 8.4.4 (Spectral Theorem for Compact Operators): If X is a NLS with T ∈ L(X, X) compact, then σ(T) is at most countably infinite, with λ = 0 as the only possible accumulation point. Moreover, the multiplicity of each eigenvalue is finite.
Proof: It is easily verified that eigenvectors associated with distinct eigenvalues are linearly independent. With that in mind, we can extract a sequence {λ_j} (not necessarily distinct) from σ(T) with associated eigenvectors {x_j} such that Tx_j = λ_j x_j and {x₁, ..., x_n} is linearly independent ∀n. Now define M_n := Span{x₁, ..., x_n}. Since Tx_λ = λx_λ, we see that
||T|| ≥ ||Tx_λ||/||x_λ|| = |λ|;
hence σ(T) is bounded.
To prove that λ = 0 is the only possible accumulation point, we argue indirectly by assuming that some λ ≠ 0 is an accumulation point of {λ_j}. First, we use Lemma 8.3.2 to construct a sequence {y_n} ⊂ X such that y_n ∈ M_n, ||y_n|| = 1 and dist(y_n, M_{n−1}) ≥ 1/2. We now wish to prove that
$$\left\|\frac{1}{\lambda_n}Ty_n - \frac{1}{\lambda_m}Ty_m\right\| \ge \frac{1}{2} \qquad (8.7)$$
for n > m. If this is true and the λ_j accumulate at some λ ≠ 0, then we get a contradiction: along that subsequence {(1/λ_n)y_n} is bounded, so {T((1/λ_n)y_n)} contains a convergent subsequence by the compactness of T, which is impossible in the face of (8.7). So, let us prove (8.7).
Given y_n ∈ M_n = Span{x₁, ..., x_n}, we can write
$$y_n = \sum_{j=1}^n \beta_j x_j$$
for some β_j ∈ ℂ. Now, we calculate
$$y_n - \frac{1}{\lambda_n}Ty_n = \sum_{j=1}^n\left(\beta_j x_j - \frac{\beta_j\lambda_j}{\lambda_n}x_j\right) = \sum_{j=1}^{n-1}\left(\beta_j x_j - \frac{\beta_j\lambda_j}{\lambda_n}x_j\right) \in M_{n-1},$$
where the j = n term vanishes since λ_n/λ_n = 1.
Thus, (1/λ_m)Ty_m ∈ M_{n−1} since m < n. It is convenient to now define
$$z := y_n - \frac{1}{\lambda_n}Ty_n + \frac{1}{\lambda_m}Ty_m \in M_{n-1}.$$
Finally, looking at (8.7), we see
$$\left\|\frac{1}{\lambda_n}Ty_n - \frac{1}{\lambda_m}Ty_m\right\| = \|y_n - z\| \ge \inf_{\omega\in M_{n-1}}\|y_n - \omega\| = \mathrm{dist}(y_n, M_{n-1}) \ge \frac{1}{2}.$$
This proves the claim; hence, by the aforementioned contradiction, {λ_j} can have only 0 as an accumulation point. Applying this conclusion to Lemma 8.4.3, we attain the full theorem.
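A concrete illustration of the theorem (a model example of my own, not from the text): the diagonal operator T(a₁, a₂, a₃, ...) = (a₁/1, a₂/2, a₃/3, ...) on ℓ² is compact, being the operator-norm limit of its finite-rank truncations, and its eigenvalues 1/n are countable, each of finite multiplicity, accumulating only at 0. The finite check below verifies that only finitely many eigenvalues escape any punctured neighbourhood of 0.

```python
eigs = [1.0 / n for n in range(1, 10001)]   # eigenvalues of the model operator

for eps in (0.1, 0.01, 0.001):
    outside = [lam for lam in eigs if lam > eps]
    print(eps, len(outside))   # only finitely many eigenvalues exceed eps

assert max(eigs) <= 1.0                     # |lambda| <= ||T|| = 1, as in the proof
assert len([lam for lam in eigs if lam > 0.01]) == 99
```

This is exactly the picture the theorem guarantees in general: the spectrum of a compact operator is a bounded sequence marching into the origin.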
8.5 Dual Spaces and Adjoint Operators
In addition to the rich structure that we have already seen in BS, it turns out that bounded
linear mappings on BS also have some very nice properties that will be very useful in our
studies of linear PDE. First, we make a formal definition.
Definition 8.5.1: If X is a NLS, we define the dual space, X∗, as the set of all bounded
linear mappings taking X → R.
The following theorem is easily verified.
Theorem 8.5.2: If X is a NLS, then X* is a BS with
$$\|f\|_{X^*} = \sup_{x\neq 0}\frac{|f(x)|}{\|x\|_X}.$$
Notation: If x ∈ X and f ∈ X∗, then by definition f (x) ∈ R. Typically, since X∗ won’t
always be a space of functions, we write 〈f , x〉 := f (x) ∈ R.
Since X∗ is a BS in its own right, we state the following
Definition 8.5.3: If X is a NLS, we define the bidual space of X as X∗∗ := (X∗)∗. Moreover,
there is a canonical imbedding j : X → X∗∗ defined by 〈j(x), f 〉 = 〈f , x〉 for all x ∈ X and
f ∈ X∗. If the canonical imbedding is onto, then X is a reflexive BS.
Linear operators between BS also have an analogy in dual spaces.
Definition 8.5.4: If $X$, $Y$ are BS and $T \in L(X, Y)$, the adjoint operator $T^* : Y^* \to X^*$ is defined by
$$\langle T^* g, x \rangle = \langle g, Tx \rangle \quad \forall x \in X, \ \forall g \in Y^*.$$
From this it is easy to see that $T^* \in L(Y^*, X^*)$.
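In finite dimensions the adjoint with respect to the Euclidean pairing is just the matrix transpose, which makes the defining identity easy to sanity-check numerically. This is an illustrative sketch, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3))        # T : R^3 -> R^4; its adjoint is T.T
x = rng.standard_normal(3)
g = rng.standard_normal(4)

lhs = (T.T @ g) @ x                    # <T* g, x>
rhs = g @ (T @ x)                      # <g, T x>
print(lhs - rhs)                       # ~0: the defining identity of the adjoint
```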
Before we can elucidate some nice properties of these adjoints, we require a little more
“structure” beyond that stipulated in the definition of a BS. Thus, we move onto the next
section.
8.6 Hilbert Spaces
Even though BS and their associated operators have a nice mathematical structure, they
lack constructions that would enable one to speak of “geometry”. As geometric structures
yield useful results in Euclidean space (inequalities, angle relations, etc.), we are motivated to
construct geometric analogies on the linear spaces (not necessarily BS) of previous sections.
This nebulous reference to “geometry” on a linear space (LS) will be clarified shortly.
Definition 8.6.1: Given an LS $X$, a map $(\cdot, \cdot) : X \times X \to \mathbb{R}$ is called a scalar product if
i.) $(x, y) = (y, x)$,
ii.) $(\lambda_1 x_1 + \lambda_2 x_2, y) = \lambda_1 (x_1, y) + \lambda_2 (x_2, y)$, and
iii.) $(x, x) > 0$ $\forall x \neq 0$
for all $x_1, x_2, y \in X$ and $\lambda_1, \lambda_2 \in \mathbb{R}$.
Correspondingly, we now define
Definition 8.6.2: A LS with a scalar product is called a Scalar Product Space (SPS). It is
called a Hilbert Space (HS) if it is complete.
Warning: Do not assume that a scalar product is the same as pairing an element of a BS with its dual, i.e. "$\langle \cdot, \cdot \rangle \neq (\cdot, \cdot)$". Actually, this comparison doesn't even make sense, as a scalar product takes both arguments from a given space whereas the pairing $\langle \cdot, \cdot \rangle$ requires an element from the space and another from the associated dual space. That aside, the dual space of a BS may very well not be isomorphic to the BS itself. In a HS, however, the two are isomorphic by the Riesz-representation theorem (stated below). With that theorem, one may characterize any bounded linear functional on a HS via the scalar product. This does not go the other way around; i.e. on a BS, the dual pairing does not necessarily define a valid scalar product.
Also, one will notice that we only require a SPS to be an LS (not a BS) with a scalar product; but it is easily verified that $\|x\| := \sqrt{(x, x)}$ satisfies the criteria stated in the definition of a norm. Thus, any HS (SPS) is a BS (NLS). This is not to say that the only valid scalar product on a space is the one that generates the norm; case in point, $L^p(\Omega)$ spaces are frequently considered with the pairing
$$(f, g) = \int_{\Omega} f \cdot g \, dx$$
where $f, g \in L^p(\Omega)$. It is easy to verify that this satisfies definition 8.6.1 (whenever the integral is defined), but it does not generate the $L^p(\Omega)$ norm unless $p = 2$.
Scalar products add a great deal of structure to the spaces we have been considering previously. First, we can now start talking about the concept of "geometry" on a SPS. We have seen that for any SPS, there exists a norm which can be analogized to length in Euclidean space. Taking this analogy one step further, we can define the "angle", $\phi$, between two elements $x$ and $y$ in a SPS, $X$, by
$$\cos \phi := \frac{(x, y)}{\|x\| \cdot \|y\|}.$$
This leads directly to the all-important orthogonality criterion $x \perp y \iff (x, y) = 0$. In addition to geometry, there are many useful inequalities that pertain to scalar products; here are a few.
• Cauchy-Schwarz inequality: $|(x, y)| \leq \|x\| \cdot \|y\|$
• Triangle inequality: $\|x + y\| \leq \|x\| + \|y\|$
• Parallelogram identity: $\|x + y\|^2 + \|x - y\|^2 = 2(\|x\|^2 + \|y\|^2)$
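These inequalities are easy to spot-check for the Euclidean scalar product on $\mathbb{R}^n$. The following sketch (an illustration, not part of the notes) evaluates the slack in each:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
nrm = np.linalg.norm

cs_gap = nrm(x) * nrm(y) - abs(x @ y)            # Cauchy-Schwarz slack, >= 0
tri_gap = nrm(x) + nrm(y) - nrm(x + y)           # triangle-inequality slack, >= 0
par_gap = nrm(x + y)**2 + nrm(x - y)**2 - 2 * (nrm(x)**2 + nrm(y)**2)
print(cs_gap, tri_gap, par_gap)                  # par_gap ~ 0: it is an identity
```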
Notation: From now on we will be referring to Hilbert Spaces, as it is usually safe to assume that we are working in a complete SPS.
With these concepts, we can now analyze some key theorems that pertain to Hilbert
Spaces.
Theorem 8.6.3 (Projection Theorem): If $H$ is a HS and $M$ is a closed subspace of $H$, then each $x \in H$ has a unique decomposition $x = y + z$ with $y \in M$, $z \in M^{\perp}$ (i.e. $z$ is perpendicular to $M$: $(z, m) = 0$ $\forall m \in M$). Moreover, $\mathrm{dist}(x, M) = \|z\|$.
Proof: First, if $x \in M$ then, as $x = x + 0$, we trivially get the conclusion. Thus, we consider $M \neq H$ with $x \notin M$. Defining $d := \mathrm{dist}(x, M)$, we select a minimizing sequence $\{y_n\} \subset M$ such that $\|x - y_n\| \to d$. Before proceeding, we need to show that $\{y_n\}$ is Cauchy. Utilizing the parallelogram identity, we write
$$\underbrace{4 \left\| x - \frac{1}{2}(y_n + y_m) \right\|^2}_{\geq 4d^2} + \|y_n - y_m\|^2 = 2\left( \|x - y_n\|^2 + \|x - y_m\|^2 \right) \to 4d^2,$$
where the first term on the LHS is $\geq 4d^2$ as $\frac{1}{2}(y_n + y_m) \in M$. Thus, we see that $\|y_n - y_m\| \to 0$; the sequence is Cauchy. Since $H$ is complete and $M$ is closed, we ascertain that $\lim_{n \to \infty} y_n = y \in M$.
Writing $x = z + y$, where $z := x - y$, the conclusion of the proof will follow provided $z \in M^{\perp}$. Consider any $y^* \in M$ and $\alpha \in \mathbb{R}$; as $y + \alpha y^* \in M$, it is calculated
$$d^2 \leq \|x - y - \alpha y^*\|^2 = (z - \alpha y^*, z - \alpha y^*) = \|z\|^2 - 2\alpha (y^*, z) + \alpha^2 \|y^*\|^2 = d^2 - 2\alpha (y^*, z) + \alpha^2 \|y^*\|^2;$$
thus,
$$|(y^*, z)| \leq \frac{\alpha}{2} \|y^*\|^2 \quad \forall \alpha > 0.$$
Letting $\alpha \to 0$, we conclude that $(y^*, z) = 0$; and since $y^* \in M$ was arbitrary, $z \in M^{\perp}$.
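For a finite-dimensional subspace $M$ of $\mathbb{R}^n$, this decomposition can be computed by least squares. The sketch below (illustrative, with an arbitrarily chosen random subspace) verifies that $z \perp M$ and that $\|z\|$ realizes the distance to $M$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2))        # M = column span of A, a closed subspace of R^6
x = rng.standard_normal(6)

coef, *_ = np.linalg.lstsq(A, x, rcond=None)
y = A @ coef                            # projection of x onto M
z = x - y                               # component of x in M-perp

print(np.linalg.norm(A.T @ z))          # ~0: z is orthogonal to M
m = A @ rng.standard_normal(2)          # an arbitrary element of M
print(np.linalg.norm(x - m) >= np.linalg.norm(z))   # ||z|| = dist(x, M)
```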
One of the nicest properties of a Hilbert Space, call it $H$, is that one can characterize any bounded linear functional on $H$ in terms of the scalar product. This is formalized by the following theorem.
Theorem 8.6.4 (Riesz-representation Theorem): If $F$ is a bounded linear functional on a Hilbert space $H$, then there exists a unique $f \in H$ such that $F(x) = (x, f)$ and $\|f\| = \|F\|$.
Proof: First, we consider $F$ such that $\mathrm{Ker}(F) = H$; then $F = 0$ and we correspondingly choose $f = 0$. If, on the other hand, we have $\mathrm{Ker}(F) \neq H$, then $\exists z \in H \setminus \mathrm{Ker}(F)$. By the projection theorem, we may assume that $z \in \mathrm{Ker}(F)^{\perp}$, as it is easily seen that $\mathrm{Ker}(F)$ is a proper closed subspace of $H$. Now, we see that
$$F\left( x - \frac{F(x)}{F(z)} z \right) = 0 \implies x - \frac{F(x)}{F(z)} z \in \mathrm{Ker}(F).$$
Since $z \in \mathrm{Ker}(F)^{\perp}$, we have by orthogonality
$$\left( x - \frac{F(x)}{F(z)} z, z \right) = 0 \iff (x, z) = \left( \frac{F(x)}{F(z)} z, z \right) = \frac{F(x)}{F(z)} \|z\|^2.$$
Solving for $F(x)$ algebraically leads to
$$F(x) = \left( x, \frac{F(z)}{\|z\|^2} z \right) = (x, f), \quad \text{where } f = \frac{F(z)}{\|z\|^2} z.$$
Lastly, we need to show $\|F\| = \|f\|$. So, on one hand, we have
$$\|F\| = \sup_{x \in H, x \neq 0} \frac{|F(x)|}{\|x\|} = \sup \frac{|(x, f)|}{\|x\|} \overset{\text{C-S}}{\leq} \sup \frac{\|x\| \cdot \|f\|}{\|x\|} = \|f\|;$$
but on the other, it is calculated that
$$\|f\|^2 = (f, f) = F(f) \leq \|F\| \cdot \|f\| \implies \|f\| \leq \|F\|.$$
Thus, we conclude that $\|f\| = \|F\|$.
Next, we will go over one of the key existence theorems pertaining to elliptic equations, but first a quick definition.
Definition 8.6.5: Given a NLS $X$, a mapping $B : X \times X \to \mathbb{R}$ is a bilinear form provided $B$ is linear in each argument when the other argument is fixed. $B$ is bounded if there exists a constant $C_0 > 0$ such that
$$|B(x, y)| \leq C_0 \|x\| \cdot \|y\|$$
for all $x, y \in X$. Analogously, $B$ is coercive if there exists a constant $C_1 > 0$ such that
$$B(x, x) \geq C_1 \|x\|^2$$
for all $x \in X$.
Warning: B is not necessarily symmetric in its arguments.
Theorem 8.6.6 (Lax-Milgram Theorem): If $H$ is a HS and $B : H \times H \to \mathbb{R}$ is a bounded, coercive bilinear form, then for all $F \in H^*$ there exists a unique $f \in H$ such that
$$B(x, f) = F(x) \quad \forall x \in H.$$
Proof: First, fix $f \in H$ and define
$$G_f(x) := B(x, f) = (x, Tf), \tag{8.8}$$
where the last equality comes from the application of the Riesz-representation theorem to the bounded linear functional $G_f$; this defines a mapping $T : H \to H$. Given $B$ is coercive, we calculate
$$C_1 \|f\|^2 \leq B(f, f) = (f, Tf) \leq \|f\| \cdot \|Tf\|,$$
where $C_1$ is the coercivity constant associated with $B$. Analogously, via the boundedness of $B$,
$$\|Tf\|^2 = (Tf, Tf) = B(Tf, f) \leq C_0 \|f\| \cdot \|Tf\|,$$
where $C_0$ is the boundedness constant of $B$. From these last two equations, we derive that
$$C_1 \|f\| \leq \|Tf\| \leq C_0 \|f\|, \tag{8.9}$$
which shows that $T$ is bounded (right inequality) and that $T$ is one-to-one with closed range $R(T)$ (left inequality). Now, we wish to show that $T$ is onto, since this along with (8.9) would imply that $T^{-1}$ is well-defined.

If $T$ is not onto, then by the projection theorem there exists $z \in R(T)^{\perp} \setminus \{0\}$ ($R(T)$ is easily noticed to be a proper closed subspace of $H$). Thus, $B(z, f) = (z, Tf) = 0$ for all $f \in H$. Choosing $f = z$, we calculate that $0 = B(z, z) \geq C_1 \|z\|^2$, which implies $z = 0$, a contradiction; i.e. $R(T) = H$. So $T^{-1}$ is indeed well-defined, as we have now shown $T$ to be onto.
The last step of the proof is to apply the Riesz-representation theorem: for any $F \in H^*$, there exists a unique $g \in H$ such that $F(x) = (x, g)$ for all $x \in H$. With that, we get
$$F(x) = (x, g) = (x, TT^{-1}g) = B(x, T^{-1}g).$$
Comparing this with (8.8), we see that $f = T^{-1}g$. The existence and uniqueness of $f$ are thus guaranteed, as we have shown $T^{-1}$ to be well-defined.
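The structure of the proof is transparent in finite dimensions: a bilinear form $B(x, f) = x^T A f$ is coercive precisely when the symmetric part of $A$ is positive definite, and solving $B(\cdot, f) = F$ amounts to inverting $A$. The following sketch is illustrative only; the matrix $A = I + (S - S^T)$ is an assumption chosen so that $B$ is non-symmetric yet $B(x, x) = |x|^2$ exactly (the skew part contributes nothing to the quadratic form):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
S = rng.standard_normal((n, n))
A = np.eye(n) + (S - S.T)              # non-symmetric, but coercive with C1 = 1

def B(x, f):                            # bounded, coercive bilinear form
    return x @ A @ f

v = rng.standard_normal(n)
print(B(v, v) - v @ v)                  # ~0: coercivity constant C1 = 1 is exact here

g = rng.standard_normal(n)              # the functional F(x) = (x, g)
f = np.linalg.solve(A, g)               # unique f with B(x, f) = F(x) for all x
x = rng.standard_normal(n)
print(B(x, f) - x @ g)                  # ~0
```

Note that because $B$ is not symmetric, $f$ differs from the Riesz representer $g$; Lax-Milgram is exactly the statement that the map $g \mapsto f$ is still a bijection.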
As was mentioned above, the Lax-Milgram theorem turns out to be the right tool for proving existence of solutions of many elliptic equations. That being said, one could be forgiven for thinking that the coercivity criterion is somewhat artificially induced, as it will be shown to correspond to the ellipticity criterion of a differential operator. It is true that this correspondence is very useful, but the Lax-Milgram theorem actually has a deeper implication than the existence of solutions to certain PDE.
Recalling the construction of Hilbert Spaces, one will remember that the properties of such a space are all dependent on the definition of the associated scalar product. Indeed, the norm and, hence, the topology of the space result from the particular definition of the scalar product (on top of the assumed ambient properties of a linear space). Philosophically speaking, Lax-Milgram indicates whether a given bilinear form acting as a scalar product will "replicate" the topological structure of an already-established HS. If this is found to be the case, then one can view the conclusion of Lax-Milgram as a direct application of the Riesz-representation theorem on the space with the bilinear form as the scalar product. Now, since topology is directly correlated to the norm of a HS, one can start to see the coercivity condition (along with the boundedness of the bilinear form) as a stipulation on the bilinear form that would guarantee the preservation of the topology of the HS if the scalar product were replaced by the bilinear form itself. Of course the geometry, as described in the beginning of the section, would be different; but this is not important in this instance, as Lax-Milgram only refers to what elements are contained in a space (a topological attribute). Hopefully, this bit of insight will make the Lax-Milgram theorem (and existence theorems in general) a little more intuitive for the reader.
8.6.1 Fredholm Alternative in Hilbert Spaces
With the additional structure of a HS, we are now able to realize more properties of adjoint
operators than was possible in BS.
Theorem 8.6.7 (): If $H$ is a HS with $T \in L(H, H)$, then $T^*$ has the following properties:
i.) $\|T^*\| = \|T\|$;
ii.) If $T$ is compact, then $T^*$ is also compact;
iii.) $\overline{R(T)} = \mathrm{Ker}(T^*)^{\perp}$.
Proof:
i.) This is clear from the fact that
$$\|T\| = \sup_{x \in H, x \neq 0} \frac{\|Tx\|}{\|x\|} = \sup_{\|x\| = \|y\| = 1} |(Tx, y)| = \sup_{\|x\| = \|y\| = 1} |(x, T^* y)| = \|T^*\|,$$
the middle expression being symmetric in $T$ and $T^*$.
ii.) Let $\{x_n\}$ be a sequence in $H$ satisfying $\|x_n\| \leq M$. Then
$$\|T^* x_n\|^2 = (T^* x_n, T^* x_n) = (x_n, TT^* x_n) \leq \|x_n\| \cdot \|TT^* x_n\| \leq M \|T\| \cdot \|T^* x_n\|,$$
so that $\|T^* x_n\| \leq M \|T\|$; that is, the sequence $\{T^* x_n\}$ is also bounded. Hence, since $T$ is compact, by passing to a subsequence if necessary, we may assume that the sequence $\{TT^* x_n\}$ converges. But then
$$\|T^*(x_n - x_m)\|^2 = (T^*(x_n - x_m), T^*(x_n - x_m)) = (x_n - x_m, TT^*(x_n - x_m)) \leq 2M \|TT^*(x_n - x_m)\| \to 0 \quad \text{as } m, n \to \infty.$$
Since $H$ is complete, the sequence $\{T^* x_n\}$ is convergent, and hence $T^*$ is compact.
iii.) Fix arbitrary $x \in H$ and set $Tx =: y \in R(T)$. From this, it is seen that $(y, f) = (Tx, f) = (x, T^* f) = 0$ for all $f \in \mathrm{Ker}(T^*)$. Thus, $R(T) \subset \mathrm{Ker}(T^*)^{\perp}$; and since $\mathrm{Ker}(T^*)^{\perp}$ is closed, $\overline{R(T)} \subset \mathrm{Ker}(T^*)^{\perp}$. Now suppose that $y \in \mathrm{Ker}(T^*)^{\perp} \setminus \overline{R(T)}$. By the projection theorem, $y = y_1 + y_2$ where $y_1 \in \overline{R(T)}$ and $y_2 \in \overline{R(T)}^{\perp} \setminus \{0\}$. Consequently, $0 = (y_2, Tx) = (T^* y_2, x)$ for all $x \in H$; hence, $y_2 \in \mathrm{Ker}(T^*)$. Therefore, $(y_2, y) = (y_2, y_1) + \|y_2\|^2 = \|y_2\|^2$. Thus, we see that $(y_2, y) \neq 0$, i.e. $y \notin \mathrm{Ker}(T^*)^{\perp}$. This contradiction of the assumption $y \in \mathrm{Ker}(T^*)^{\perp} \setminus \overline{R(T)}$ leads to the conclusion that $\overline{R(T)} = \mathrm{Ker}(T^*)^{\perp}$.
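All three properties can be checked for a rectangular matrix, whose adjoint with respect to the Euclidean pairing is the transpose; here a basis of $\mathrm{Ker}(T^*)$ is read off from the SVD. This is an illustrative sketch, not part of the notes:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((5, 3))        # T : R^3 -> R^5, adjoint T* = T.T

# i.) the operator norms of T and T* agree
print(np.linalg.norm(T, 2) - np.linalg.norm(T.T, 2))

# iii.) R(T) = Ker(T*)^perp: left-singular vectors beyond the rank span Ker(T.T)
U, s, Vt = np.linalg.svd(T)
N = U[:, 3:]                           # basis of Ker(T*) in R^5
y = T @ rng.standard_normal(3)         # an arbitrary element of R(T)
print(np.linalg.norm(T.T @ N))         # ~0: the columns of N really lie in Ker(T*)
print(np.linalg.norm(N.T @ y))         # ~0: R(T) is orthogonal to Ker(T*)
```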
Now, we can combine the above properties of adjoints, the results pertaining to the spectra of compact operators, and the Fredholm Alternative on BS to directly ascertain the following generalization.
Theorem 8.6.8 (Fredholm Alternative in Hilbert Spaces): Consider a HS $H$ and a compact operator $T : H \to H$. In this case, there exists an at most countable set $\sigma(T)$ having no accumulation points except possibly $\lambda = 0$ such that the following holds: if $\lambda \neq 0$, $\lambda \notin \sigma(T)$, then the equations
$$\lambda x - Tx = y, \qquad \lambda x - T^* x = y \tag{8.10}$$
have a unique solution $x \in H$ for all $y \in H$. Moreover, the inverse mappings $(\lambda I - T)^{-1}$, $(\lambda I - T^*)^{-1}$ are bounded. If $\lambda \in \sigma(T)$, then $\mathrm{Ker}(\lambda I - T)$ and $\mathrm{Ker}(\lambda I - T^*)$ have positive and finite dimension, and we can solve the equations (8.10) if and only if $y$ is orthogonal to $\mathrm{Ker}(\lambda I - T^*)$ and $\mathrm{Ker}(\lambda I - T)$, respectively.
There is a lot of information in this theorem; it's worth the effort to try and get one's mind around it. Also, keep in mind that $\mathrm{Ker}(\lambda I - T)$ and $\mathrm{Ker}(\lambda I - T^*)$ are just the eigenspaces of $T$ and $T^*$, respectively, corresponding to $\lambda$.
8.7 Weak Convergence and Compactness; the Banach-Alaoglu Theorem
We begin this section with some motivation. Consider a sequence of approximate solutions of $F(u, Du, D^2u) = 0$, i.e. a bounded sequence $\{u_k\} \subset X$ such that $F(u_k, Du_k, D^2u_k) = \varepsilon_k \to 0$. Now, there are two important questions that arise:
• Does boundedness imply there is a convergent subsequence, i.e. $u_{k_j} \to u$?
• Is this limit a solution?
As we have seen that bounded sequences in infinite-dimensional BS do not necessarily contain convergent subsequences (i.e. in the strong sense), these questions are not readily answered with our current "set of tools". Given that such approximations as the one stated are frequently utilized in PDE, we now seek to define a new criterion for convergence so that we can indeed answer both these questions in the affirmative.
Definition 8.7.1: Take $X$ to be a BS, with $X^*$ as its dual.
i.) $\{x_j\} \subset X$ converges weakly to $x \in X$ if
$$\langle y, x_j \rangle \to \langle y, x \rangle \quad \forall y \in X^*.$$
Notation: $u_k$ converging weakly to $u$ is denoted by $u_k \rightharpoonup u$.
ii.) $\{y_j\} \subset X^*$ converges weakly* to $y \in X^*$ if
$$\langle y_j, x \rangle \to \langle y, x \rangle \quad \forall x \in X.$$
It can be readily seen that strong convergence (that is, convergence in norm) implies weak convergence. Also, by the linearity of the pairing and the fact that $X^*$ separates points of $X$, it is observed that weak limits are unique.
Now, we present the key theorem which essentially enables us to answer "yes" in the hypothetical situation presented at the beginning of the section.
Theorem 8.7.2 (Banach-Alaoglu): If $X$ is a TVS with $\Omega \subset X$ a bounded, open set containing the origin, then the unit ball in $X^*$ is sequentially weak* compact; i.e., defining
$$K = \{F \in X^* : |F(x)| \leq 1 \ \forall x \in \Omega\},$$
$K$ is sequentially weak* compact.
Proof: First, since $\Omega$ is open, bounded, and contains the origin, there corresponds to each $x \in X$ a number $\gamma_x < \infty$ such that $x \in \gamma_x \Omega$, where $a\Omega := \{x \in X : \frac{x}{a} \in \Omega\}$ for any fixed $a \in \mathbb{R}$. Now for any $x \in X$, we know that $\frac{x}{\gamma_x} \in \Omega$; hence
$$|F(x)| \leq \gamma_x \quad (x \in X, \ F \in K).$$
Let $A_x$ be the set of all scalars $\alpha$ such that $|\alpha| \leq \gamma_x$. Let $\tau$ be the product topology on $P$, the Cartesian product of all $A_x$, one for each $x \in X$. Indeed, $P$ will almost always be a space with an uncountable dimension. By Tychonoff's theorem, $P$ is compact since each $A_x$ is compact. The elements of $P$ are functions $f$ on $X$ (they may or may not be linear or even continuous) that satisfy
$$|f(x)| \leq \gamma_x \quad (x \in X).$$
Thus $K \subset X^* \cap P$, as $K \subset X^*$ and $K \subset P$. It follows that $K$ inherits two topologies: one from $X^*$ (its weak*-topology, to which the conclusion of the theorem refers) and the other, $\tau$, from $P$. Again, see [Bre97] for the concept of inherited topologies.
We now claim that
i.) these two topologies coincide on $K$, and
ii.) $K$ is a closed subset of $P$.
If ii.) is proven, then $K$ is a closed subset of an uncountable Cartesian product of $\tau$-compact sets (each one contained in some $A_x$); hence it is compact by Tychonoff's theorem. Moreover, if i.) is shown to be true, then $K$ is concluded to be weak*-compact from the previous statement.
So, we now seek to prove our claims. To do this, first fix $F_0 \in K$; choose $x_i \in X$ for $1 \leq i \leq n$; and fix $\delta > 0$. Defining
$$W_1 := \{F \in X^* : |F(x_i) - F_0(x_i)| < \delta \text{ for } 1 \leq i \leq n\}$$
and
$$W_2 := \{f \in P : |f(x_i) - F_0(x_i)| < \delta \text{ for } 1 \leq i \leq n\},$$
we let $n$, $x_i$, and $\delta$ range over all admissible values. The resulting sets $W_1$ then form a local base for the weak*-topology of $X^*$ at $F_0$, and the sets $W_2$ form a local base for the product topology $\tau$ of $P$ at $F_0$; i.e. any open set in $X^*$ or $P$ can be represented as a union of translates of the sets contained in $W_1$ or $W_2$, respectively. Since $K \subset P \cap X^*$, we have
$$W_1 \cap K = W_2 \cap K.$$
This proves i.).
Now, suppose $f_0$ is in the $\tau$-closure of $K$. Next choose $x, y \in X$ with scalars $\alpha, \beta$, and $\varepsilon > 0$. The set of all $f \in P$ such that $|f - f_0| < \varepsilon$ at $x$, at $y$, and at $\alpha x + \beta y$ is a $\tau$-neighborhood of $f_0$. Therefore $K$ contains such an $f$. Since this $f \in K \subset X^*$, and hence is linear, we have
$$f_0(\alpha x + \beta y) - \alpha f_0(x) - \beta f_0(y) = (f_0 - f)(\alpha x + \beta y) + \alpha (f - f_0)(x) + \beta (f - f_0)(y),$$
so that
$$|f_0(\alpha x + \beta y) - \alpha f_0(x) - \beta f_0(y)| < (1 + |\alpha| + |\beta|)\varepsilon.$$
Since $\varepsilon$ was arbitrary, we see that $f_0$ is linear. Finally, if $x \in \Omega$ and $\varepsilon > 0$, the same argument shows that there is a $g \in K$ such that $|g(x) - f_0(x)| < \varepsilon$. Since $|g(x)| \leq 1$ by the definition of $K$, it follows that $|f_0(x)| \leq 1$. We conclude that $f_0 \in K$. This proves ii.) and hence the theorem.
As an immediate consequence we have
Corollary 8.7.3 (): If X is a reflexive BS, then the unit ball in X is sequentially weakly
compact.
From this corollary and the Riesz-representation theorem, we have that any bounded set in any HS is sequentially weakly compact (the Riesz-representation theorem proves that all HS are reflexive BS). This is a key point that will be used frequently in the upcoming linear theory, and gives us the criterion under which we can answer "yes" to the questions posed at the beginning of the section.
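The canonical example is the sequence of unit vectors in $l^2$ (compare exercise 8.4): it converges weakly to $0$ but has no strongly convergent subsequence. A truncated numerical sketch (illustrative only; the fixed element $y$ and its tail cutoff at length $N$ are arbitrary choices):

```python
import numpy as np

N = 1000
y = 1.0 / (1.0 + np.arange(N))         # a fixed element of l^2 (truncated to length N)

def e(n):                               # n-th unit vector e_n, truncated
    v = np.zeros(N)
    v[n] = 1.0
    return v

# weak convergence: the pairing (y, e_n) = y[n] tends to 0 ...
print(y @ e(5), y @ e(500))
# ... but there is no strong limit: distinct unit vectors stay sqrt(2) apart
print(np.linalg.norm(e(5) - e(500)))
```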
Note: With this section I have broken the sequential structure of the chapter, in that one only needs a TVS to speak of weak convergence, etc. The reason for presenting this material now is that it can only be motivated once the student starts to realize how restrictive the notion of strong convergence in a BS/HS is. Thus, for motivation's sake, I have decided to present this material now, late in the chapter.
8.8 Key Words from Measure Theory
This section is intended to be a whirlwind review of basic measure theory; a reader who is not already familiar with the following concepts should consult [Roy88] for a proper treatise of such. In reality, this section is a glorified appendix, in that most of the following results are not proven but are explained in relation to what has already been covered.
8.8.1 Basic Definitions
Let us recall the "typical" definition of a measure:
Definition 8.8.1: Consider the set of all sets of real numbers, $\mathcal{P}(\mathbb{R})$. A mapping $\mu : \mathcal{P}(\mathbb{R}) \to [0, \infty]$ is a measure if the following properties hold:
i.) for any interval $l$, $\mu l =$ the length of $l$;
ii.) if $\{E_n\}$ is a sequence of disjoint sets, then
$$\mu \left( \bigcup_n E_n \right) = \sum_n \mu E_n;$$
iii.) $\mu$ is translationally invariant; i.e., if $E$ is a set for which $\mu$ is defined and if $E + y := \{x + y : x \in E\}$, then $\mu(E + y) = \mu E$ for all $y \in \mathbb{R}$.
Now, we say this is the "typical" definition as there are actually different ways to define a measure. To elaborate, in addition to the above three criteria, it would be nice to have $\mu$ defined for all subsets of $\mathbb{R}$; but no such mapping exists. Thus, one of these four criteria must be compromised to actually get a valid definition; obviously, this may be done in a number of ways. We take definition 8.8.1 as a proper definition, as the measures we will consider, Lebesgue (and, more esoterically, Hausdorff), obey the criteria contained in such.
The next definition is a necessary precursor to defining Lebesgue measure.
Definition 8.8.2: The outer measure, denoted $\mu^*$, is defined for any $A \subseteq \mathbb{R}$ as
$$\mu^* A = \inf_{A \subset \bigcup_n I_n} \sum_n l(I_n),$$
where the $I_n$ are intervals in $\mathbb{R}$ and $l(I_n)$ is the length of $I_n$.
Now, even though $\mu^*$ is defined for all subsets of $\mathbb{R}$, it does not satisfy the countable-additivity condition ii.) of definition 8.8.1, which is undesirable for integration. Thus, we go about restricting the sets under consideration to guarantee that ii.) is satisfied, via the following definition.
Definition 8.8.3: A set $E \subset \mathbb{R}$ is Lebesgue measurable if
$$\mu^* A = \mu^*(A \cap E) + \mu^*(A \cap E^c)$$
for all $A \subset \mathbb{R}$. Moreover, the Lebesgue measure of $E$ is just $\mu^* E$.
It is left to the reader to verify that this definition does indeed restore condition ii.) of definition 8.8.1; again, see [Roy88] if you need help.
Denoting the set of all measurable sets by $\mathcal{B}$, let us review one of the properties of $\mathcal{B}$. To do this, we recall the following definition.
Definition 8.8.4: A collection of sets $\mathcal{A}$ is a $\sigma$-ring on a space $X$ if the following are true:
• $X \in \mathcal{A}$;
• if $E \in \mathcal{A}$, then $E^c := X \setminus E \in \mathcal{A}$;
• and if $E_i \in \mathcal{A}$, then
$$\bigcup_{i=1}^{\infty} E_i \in \mathcal{A}.$$
Canonically extending the above concept of Lebesgue measure on $\mathbb{R}$ to $\mathbb{R}^n$ by replacing intervals with Cartesian products of intervals (and lengths with volumes), it is true that $\mathcal{B}$ is a $\sigma$-ring over $\mathbb{R}^n$.
Now, we move on to some important definitions pertaining to general measures.
Definition 8.8.5: $N \in \mathcal{B}$ is called a set of $\mu$-measure zero if $\mu(N) = 0$. This is equivalent to saying that for every $\varepsilon > 0$ there exist open sets $A_i$ with $N \subset \bigcup A_i$ such that $\mu\left( \bigcup A_i \right) < \varepsilon$.
Definition 8.8.6: A property holds $\mu$-a.e. ($\mu$ almost everywhere) if the set of all $x$ for which the property does not hold is a set of $\mu$-measure zero.
Lastly, we go over what it means for a function to be measurable.
Definition 8.8.7: A function $f : \mathbb{R}^n \to \mathbb{R}$ is $\mathcal{B}$-measurable if $\{x : f(x) \in U\} \in \mathcal{B}$ for all open sets $U$ in $\mathbb{R}$.
8.8.2 Integrability
For integrability studies, we first define a special class of functions.
Definition 8.8.8: A function $f$ is simple if $f$ takes finitely many non-zero constant values $f_i$ on finitely many disjoint measurable sets $A_i$ and is zero elsewhere, i.e. $f = \sum f_i \chi_{A_i}$, where
$$\chi_{A_i}(x) = \begin{cases} 1 & x \in A_i \\ 0 & x \notin A_i. \end{cases}$$
A simple function $f$ is said to be integrable if
$$\sum |f_i| \cdot \mu(A_i) < \infty;$$
and in this case
$$\int f \, d\mu = \sum f_i \cdot \mu(A_i).$$
Note: Keep in mind that $\mu$ is a general measure; thus, the above criterion for integrability of a simple function is dependent on our choice of $\mu$. Still considering a general measure, we now make the following generalization.
Definition 8.8.9: Consider a general function $g$ and a sequence of simple functions $\{f_n\}$ such that $g = \lim f_n$ a.e. Then $g$ is said to be integrable (with respect to the measure $\mu$) if
$$\lim_{n,k \to \infty} \int |f_n(x) - f_k(x)| \, d\mu = 0.$$
Moreover, since the limit
$$\lim_{k \to \infty} \int f_k \, d\mu$$
then exists, we define $\int g \, d\mu$ to be this limit, which is independent of the choice of approximating sequence.
Now, we specify $\mu$ to be henceforth the Lebesgue measure. The corresponding Lebesgue integrals have nice convergence properties, i.e. there exists a good variety of conditions under which one can interchange limits and integrals. We now rapidly go over these convergence theorems, referring the reader to [Roy88] for proofs.
Theorem 8.8.10 (Lebesgue’s Dominated Convergence): If g is integrable with some fi →f a.e., where f is measurable and |fi | ≤ g for all i , then f is integrable. Moreover,
∫fi dµ→
∫f dµ
[limi→∞
∫fi dµ =
∫limi→∞
fi dµ
].
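The domination hypothesis is exactly what rules out mass escaping in the limit. The classic counterexample $f_i = i \cdot \chi_{(0, 1/i)}$ satisfies $f_i \to 0$ a.e. while $\int_0^1 f_i \, dx = 1$ for every $i$, and no integrable $g$ dominates the whole sequence. A midpoint-rule sketch (illustrative, not part of the notes):

```python
import numpy as np

n = 10**6
x = (np.arange(n) + 0.5) / n           # midpoint grid on (0, 1)

def f(i):                               # f_i = i on (0, 1/i), 0 elsewhere
    return np.where(x < 1.0 / i, float(i), 0.0)

for i in (10, 100, 1000):
    print(i, f(i).mean())               # the integral stays ~1, yet f_i -> 0 pointwise
```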
Theorem 8.8.11 (Monotone Convergence (Beppo-Levi)): If $\{f_i\}$ is a sequence of integrable functions such that $0 \leq f_i \nearrow f$ monotonically and
$$\int f_i \, d\mu \leq C < \infty,$$
then $f$ is integrable and
$$\int f \, d\mu = \lim_{i \to \infty} \int f_i \, d\mu.$$
Lemma 8.8.12 (Fatou’s): If fi is a sequence of integrable functions with f ≥ 0 and
∫fi dλ ≤ C <∞,
then lim inf fi is integrable and
∫limi→∞
fi dµ ≤ limi→∞
inf
∫fi dµ
Theorem 8.8.13 (Egoroff’s): Take fi to be a sequence of functions such that fi → f a.e.
on a set E, with f measurable, where E is measurable and µ(E) < ∞, then ∀ε > 0, ∃Eεmeasurable such that λ(E \ Eε) < ε and fi → f uniformly on Eε.
The next theorem states that multi-dimensional integrals can indeed be computed via iterated one-dimensional integrals.
Theorem 8.8.14 (Fubini’s): i.) If f : Rn × Rm → R is measurable such that
∫
Rn
∫
Rm|f (x, y)| dxdy
exists, then
∫
Rn
∫
Rmf (x, y) dxdy =
∫
Rng(y) dy,
where
g(y) =
∫
Rmf (x, y) dx
ii.) On the other hand, if
$$g(y) = \int_{\mathbb{R}^m} |f(x, y)| \, dx < \infty$$
for almost every $y$ with
$$\int_{\mathbb{R}^n} g(y) \, dy < \infty,$$
then
$$\int_{\mathbb{R}^n} \int_{\mathbb{R}^m} |f(x, y)| \, dx \, dy < \infty.$$
8.8.3 Calderon-Zygmund Decomposition
A useful construction for some measure-theoretic arguments in the forthcoming chapters is that of cube decomposition. This is a recursive construction that enables one to ascertain estimates for the measure of a set imbedded in $\mathbb{R}^n$.
First, consider a cube $K_0 \subset \mathbb{R}^n$, a non-negative integrable function $f$, and $t > 0$, all satisfying
$$\int_{K_0} f \, dx \leq t |K_0|,$$
where $|\cdot|$ indicates the Lebesgue measure of the set. Now, by bisecting the edges of $K_0$, we divide $K_0$ into $2^n$ congruent subcubes whose interiors are disjoint. The subcubes $K$ which satisfy
$$\int_K f \, dx \leq t |K| \tag{8.11}$$
are similarly divided, and the process continues indefinitely. Let $\mathcal{K}^*$ be the set of subcubes $K$ thus obtained that satisfy
$$\int_K f \, dx > t |K|,$$
and for each $K \in \mathcal{K}^*$, denote by $\tilde{K}$ the subcube whose subdivision gives $K$. It is clear that
$$\frac{|\tilde{K}|}{|K|} = 2^n$$
and that for any $K \in \mathcal{K}^*$, we have
$$t < \frac{1}{|K|} \int_K f \, dx \leq 2^n t.$$
Moreover, setting
$$F = \bigcup_{K \in \mathcal{K}^*} K$$
and $G = K_0 \setminus F$, we have
$$f \leq t \quad \text{a.e. in } G. \tag{8.12}$$
This inequality is a consequence of Lebesgue's differentiation theorem, as each point in $G$ lies in a nested sequence of cubes satisfying (8.11) with the diameters of the sequence of cubes tending to zero.
For some pointwise estimates, we also need to consider the set
$$\tilde{F} = \bigcup_{K \in \mathcal{K}^*} \tilde{K},$$
which satisfies, by (8.11),
$$\int_{\tilde{F}} f \, dx \leq t |\tilde{F}|. \tag{8.13}$$
In particular, when $f$ is the characteristic function $\chi_\Gamma$ of a measurable subset $\Gamma$ of $K_0$ and $t < 1$, we obtain from (8.12) and (8.13) that
$$|\Gamma| = |\Gamma \cap F| \leq t |\tilde{F}|.$$
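In one dimension, the decomposition is a simple stopping-time recursion on dyadic intervals, and the two-sided bound $t < \frac{1}{|K|}\int_K f \, dx \leq 2t$ (here $2^n = 2$) can be checked directly. A sketch, under the assumptions that $f \geq 0$ is sampled on a dyadic grid and that averages stand in for integrals:

```python
import numpy as np

def cz_decompose(f, t):
    """Dyadic Calderon-Zygmund stopping in 1-D: return the index ranges
    (lo, hi) on which the average of f first exceeds t."""
    out = []

    def split(lo, hi):
        if f[lo:hi].mean() > t:
            out.append((lo, hi))       # selected cube: average exceeds t, stop here
        elif hi - lo > 1:
            mid = (lo + hi) // 2       # average <= t: bisect and recurse
            split(lo, mid)
            split(mid, hi)

    split(0, len(f))
    return out

rng = np.random.default_rng(5)
f = rng.exponential(1.0, 1024)          # nonnegative sample on a dyadic grid
t = 2.0 * f.mean()                      # hypothesis: the average over K0 is <= t
cubes = cz_decompose(f, t)
avgs = [f[lo:hi].mean() for lo, hi in cubes]
print(len(cubes), min(avgs) > t, max(avgs) <= 2 * t)
```

Every selected interval has average $> t$ by construction, and $\leq 2t$ because its parent (which was subdivided) had average $\leq t$.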
8.8.4 Distribution functions
Let $f$ be a measurable function on a domain $\Omega$ (bounded or unbounded) in $\mathbb{R}^n$. The distribution function $\mu = \mu_f$ of $f$ is defined by
$$\mu(t) = \mu_f(t) = |\{x \in \Omega : |f(x)| > t\}|$$
for $t > 0$, and measures the relative size of $f$. Note that $\mu$ is a decreasing function on $(0, \infty)$. The basic properties of the distribution function are embodied in the following lemma.
Lemma 8.8.15 (): For any $p > 0$ and $|f|^p \in L^1(\Omega)$, we have
$$\mu(t) \leq t^{-p} \int_\Omega |f|^p \, dx, \tag{8.14}$$
$$\int_\Omega |f|^p \, dx = p \int_0^\infty t^{p-1} \mu(t) \, dt. \tag{8.15}$$
Proof: Clearly
$$\mu(t) \cdot t^p \leq \int_{\{t \leq |f|\}} |f|^p \, dx \leq \int_\Omega |f|^p \, dx$$
for all $t > 0$, and hence (8.14) follows. Next, suppose that $f \in L^1(\Omega)$. Then, by Fubini's theorem,
$$\int_\Omega |f| \, dx = \int_\Omega \int_0^{|f(x)|} dt \, dx = \int_0^\infty \mu(t) \, dt,$$
and the result (8.15) for general $p$ follows by a change of variables.
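Identity (8.15), often called the "layer-cake" formula, is easy to verify numerically. The sketch below (illustrative, not part of the notes) takes $f(x) = x$ on $(0, 1)$ and $p = 2$, for which both sides equal $1/3$:

```python
import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n            # midpoint grid on (0, 1)
f = x                                    # test function f(x) = x
p = 2.0

lhs = np.mean(np.abs(f) ** p)            # LHS of (8.15): integral of |f|^p

t = (np.arange(n) + 0.5) / n             # t-grid; mu(t) = 0 for t >= sup|f| = 1
mu = np.array([np.mean(np.abs(f) > s) for s in t])   # distribution function
rhs = p * np.mean(t ** (p - 1) * mu)     # RHS: p * integral of t^{p-1} mu(t)

print(lhs, rhs)                          # both close to 1/3
```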
8.9 Exercises
8.1: Let $X = C^0([-1, 1])$ be the space of all continuous functions on the interval $[-1, 1]$, $Y = C^1([-1, 1])$ be the space of all continuously differentiable functions, and let $Z = C_0^1([-1, 1])$ be the space of all functions $u \in Y$ with boundary values $u(-1) = u(1) = 0$.
a) Define
$$(u, v) = \int_{-1}^{1} u(x) v(x) \, dx.$$
On which of the spaces $X$, $Y$, and $Z$ does $(\cdot, \cdot)$ define a scalar product? Now consider those spaces for which $(\cdot, \cdot)$ defines a scalar product and endow them with the norm $\|\cdot\| = (\cdot, \cdot)^{1/2}$. Which of the spaces is a pre-Hilbert space and which is a Hilbert space (if any)?
b) Define
$$(u, v) = \int_{-1}^{1} u'(x) v'(x) \, dx.$$
On which of the spaces $X$, $Y$, and $Z$ does $(\cdot, \cdot)$ define a scalar product? Now consider those spaces for which $(\cdot, \cdot)$ defines a scalar product and endow them with the norm $\|\cdot\| = (\cdot, \cdot)^{1/2}$. Which of the spaces is a pre-Hilbert space and which is a Hilbert space (if any)?
8.2: Suppose that $\Omega$ is an open and bounded domain in $\mathbb{R}^n$. Let $k \in \mathbb{N}$, $k \geq 0$, and $\gamma \in (0, 1]$. We define the norm $\|\cdot\|_{k,\gamma}$ in the following way. The $\gamma$th-Hölder seminorm on $\Omega$ is given by
$$[u]_{\gamma;\Omega} = \sup_{x,y \in \Omega, x \neq y} \frac{|u(x) - u(y)|}{|x - y|^\gamma},$$
and we define the maximum norm by
$$|u|_{\infty;\Omega} = \sup_{x \in \Omega} |u(x)|.$$
The Hölder space $C^{k,\gamma}(\Omega)$ consists of all functions $u \in C^k(\Omega)$ for which the norm
$$\|u\|_{k,\gamma;\Omega} = \sum_{|\alpha| \leq k} \|D^\alpha u\|_{\infty;\Omega} + \sum_{|\alpha| = k} [D^\alpha u]_{\gamma;\Omega}$$
is finite. Prove that $C^{k,\gamma}(\Omega)$ endowed with the norm $\|\cdot\|_{k,\gamma;\Omega}$ is a Banach space.
Hint: You may prove the result for $C^{0,\gamma}([0, 1])$ if you indicate what would change for the general result.
8.3: Consider the shift operator $T : l^2 \to l^2$ given by
$$(Tx)_i = x_{i+1}.$$
Prove the following assertion. It is true that
$$x - Tx = 0 \iff x = 0;$$
however, the equation
$$x - Tx = y$$
does not have a solution for all $y \in l^2$. This implies that $T$ is not compact and the Fredholm alternative cannot be applied. Give an example of a bounded sequence $\{x_j\}$ such that $\{Tx_j\}$ does not have a convergent subsequence.
8.4: The space $l^2$ is a Hilbert space with respect to the scalar product
$$(x, y) = \sum_{i=1}^{\infty} x_i y_i.$$
Use general theorems to prove that the sequence of unit vectors in $l^2$ given by $(e_i)_j = \delta_{ij}$ contains a weakly convergent subsequence, and characterize the weak limit.
8.5: Find an example of a sequence of integrable functions $f_i$ on the interval $(0, 1)$ that converge to an integrable function $f$ a.e. such that the identity
$$\lim_{i \to \infty} \int_0^1 f_i(x) \, dx = \int_0^1 \lim_{i \to \infty} f_i(x) \, dx$$
does not hold.
8.6: Let $a \in C^1([0, 1])$, $a \geq 1$. Show that the ODE
$$-(au')' + u = f$$
has a solution $u \in C^2([0, 1])$ with $u(0) = u(1) = 0$ for all $f \in C^0([0, 1])$.
Hint: Use the method of continuity and compare the equation with the problem $-u'' = f$. In order to prove the a-priori estimate, derive first the estimate
$$\sup |t u_t| \leq \sup |f|,$$
where $u_t$ is the solution corresponding to the parameter $t \in [0, 1]$.
8.7: Let $\lambda \in \mathbb{R}$. Prove the following alternative: Either (i) the homogeneous equation
$$-u'' - \lambda u = 0$$
has a nontrivial solution $u \in C^2([0, 1])$ with $u(0) = u(1) = 0$, or (ii) for all $f \in C^0([0, 1])$ there exists a unique solution $u \in C^2([0, 1])$ with $u(0) = u(1) = 0$ of the equation
$$-u'' - \lambda u = f.$$
Moreover, the mapping $R_\lambda : f \mapsto u$ is bounded as a mapping from $C^0([0, 1])$ into $C^2([0, 1])$.
Hint: Use Arzela-Ascoli.
8.8: Let $1 < p \leq \infty$ and $\alpha = 1 - \frac{1}{p}$. Then there exists a constant $C$ that depends only on $a$, $b$, and $p$ such that for all $u \in C^1([a, b])$ and $x_0 \in [a, b]$ the inequality
$$\|u\|_{C^{0,\alpha}([a,b])} \leq |u(x_0)| + C \|u'\|_{L^p([a,b])}$$
holds.
8.9: Let $X = L^2(0, 1)$ and
$$M = \left\{ f \in X : \int_0^1 f(x) \, dx = 0 \right\}.$$
Show that $M$ is a closed subspace of $X$. Find $M^\perp$ with respect to the notion of orthogonality given by the $L^2$ scalar product
$$(\cdot, \cdot) : X \times X \to \mathbb{R}, \quad (u, v) = \int_0^1 u(x) v(x) \, dx.$$
According to the projection theorem, every $f \in X$ can be written as $f = f_1 + f_2$ with $f_1 \in M$ and $f_2 \in M^\perp$. Characterize $f_1$ and $f_2$.
8.10: Suppose that $p_i \in (1, \infty)$ for $i = 1, \dots, n$ with
$$\frac{1}{p_1} + \cdots + \frac{1}{p_n} = 1.$$
Prove the following generalization of Hölder's inequality: if $f_i \in L^{p_i}(\Omega)$, then
$$f = \prod_{i=1}^n f_i \in L^1(\Omega) \quad \text{and} \quad \left\| \prod_{i=1}^n f_i \right\|_{L^1(\Omega)} \leq \prod_{i=1}^n \|f_i\|_{L^{p_i}(\Omega)}.$$
8.11: [Gilbarg-Trudinger, Exercise 7.1] Let $\Omega$ be a bounded domain in $\mathbb{R}^n$. If $u$ is a measurable function on $\Omega$ such that $|u|^p \in L^1(\Omega)$ for some $p \in \mathbb{R}$, we define
$$\Phi_p(u) = \left( \frac{1}{|\Omega|} \int_\Omega |u|^p \, dx \right)^{\frac{1}{p}}.$$
Show that
$$\lim_{p \to \infty} \Phi_p(u) = \sup_\Omega |u|.$$
8.12: Suppose that the estimate
$$\|u - u_\Omega\|_{L^p(\Omega)} \leq C_\Omega \|Du\|_{L^p(\Omega)}$$
holds for $\Omega = B(0, 1) \subset \mathbb{R}^n$ with a constant $C_1 > 0$ for all $u \in W_0^{1,p}(B(0, 1))$. Here $u_\Omega$ denotes the mean value of $u$ on $\Omega$, that is,
$$u_\Omega = \frac{1}{|\Omega|} \int_\Omega u(z) \, dz.$$
Find the corresponding estimate for $\Omega = B(x_0, R)$, $R > 0$, using a suitable change of coordinates. What is $C_{B(x_0,R)}$ in terms of $C_1$ and $R$?
8.13: [Qualifying exam 08/99] Let f ∈ L1(0, 1) and suppose that
∫_0^1 f φ′ dx = 0 ∀φ ∈ C∞0(0, 1).
Show that f is constant.
Hint: Use convolution, i.e., show the assertion first for fε = ρε ∗ f .
8.14: Let g ∈ L1(a, b) and define f by
f(x) = ∫_a^x g(y) dy.
Prove that f ∈ W 1,1(a, b) with Df = g.
Hint: Use the fundamental theorem of Calculus for an approximation gk of g. Show that
the corresponding sequence fk is a Cauchy sequence in L1(a, b).
8.15: [Evans 5.10 #8,9] Use integration by parts to prove the following interpolation in-
equalities.
a) If u ∈ H2(Ω) ∩H10(Ω), then
∫_Ω |Du|^2 dx ≤ C ( ∫_Ω u^2 dx )^{1/2} ( ∫_Ω |D^2u|^2 dx )^{1/2}
b) If u ∈ W 2,p(Ω) ∩W 1,p0 (Ω) with p ∈ [2,∞), then
∫_Ω |Du|^p dx ≤ C ( ∫_Ω |u|^p dx )^{1/2} ( ∫_Ω |D^2u|^p dx )^{1/2}.
Hint: Assertion a) is a special case of b), but it might be useful to do a) first. For simplicity,
you may assume that u ∈ C∞c (Ω). Also note that
∫_Ω |Du|^p dx = ∑_{i=1}^n ∫_Ω |D_i u|^2 |Du|^{p−2} dx.
8.16: [Qualifying exam 08/01] Suppose that u ∈ H1(R), and for simplicity suppose that u
is continuously differentiable. Prove that
∑_{n=−∞}^{∞} |u(n)|^2 < ∞.
8.17: Suppose that n ≥ 2, that Ω = B(0, 1), and that x0 ∈ Ω. Let 1 ≤ p <∞. Show that
u ∈ C1(Ω \ {x0}) belongs to W1,p(Ω) if u and its classical derivative ∇u satisfy the following inequality:
∫_{Ω \ {x0}} (|u|^p + |∇u|^p) dx < ∞.
Note that the examples from the chapter show that the corresponding result does not hold
for n = 1. This is related to the question of how big a set in Rn has to be in order to be
“seen” by Sobolev functions.
Hint: Show the result first under the additional assumption that u ∈ L∞(Ω) and then
consider uk = φk(u) where φk is a smooth function which is close to
ψk(s) =
k if s ∈ (k,∞),
s if s ∈ [−k, k ],
−k if s ∈ (−∞,−k).
You could choose φk to be the convolution of ψk with a fixed kernel.
8.18: Give an example of an open set Ω ⊂ Rn and a function u ∈ W 1,∞(Ω), such that
u is not Lipschitz continuous on Ω.
Hint: Take Ω to be the open unit disk in R2, with a slit removed.
8.19: Let Ω = B(0, 1/2) ⊂ Rn, n ≥ 2. Show that log log(1/|x|) ∈ W1,n(Ω).
8.20: [Qualifying exam 01/00]
a) Show that the closed unit ball in the Hilbert space H = L2([0, 1]) is not compact.
b) Let g : [0, 1] → R be a continuous function and define the operator T : H → H
by
(Tu)(x) = g(x)u(x) for x ∈ [0, 1].
Prove that T is a compact operator if and only if the function g is identically zero.
8.21: [Qualifying exam 08/00] Let Ω = B(0, 1) be the unit ball in R3. For any function
u : Ω → R, define the trace Tu by restricting u to ∂Ω, i.e., Tu = u|_∂Ω. Show that
T : L2(Ω) → L2(∂Ω) is not bounded.
8.22: [Qualifying exam 01/00] Let H = H1([0, 1]) and let Tu = u(1).
a) Explain precisely how T is defined for all u ∈ H, and show that T is a bounded linear
functional on H.
b) Let (·, ·)H denote the standard inner product in H. By the Riesz representation theorem,
there exists a unique v ∈ H such that Tu = (u, v)H for all u ∈ H. Find v .
8.23: Let Ω = B(0, 1) ⊂ Rn with n ≥ 2, α ∈ R, and f (x) = |x |α. For fixed α, determine
for which values of p and q the function f belongs to Lp(Ω) and W 1,q(Ω), respectively. Is
there a relation between the values?
8.24: Let Ω ⊂ Rn be an open and bounded domain with smooth boundary. Prove that
for p > n, the space W 1,p(Ω) is a Banach algebra with respect to the usual multiplication
of functions, that is, if f , g ∈ W 1,p(Ω), then f g ∈ W 1,p(Ω).
8.25: [Evans 5.10 #11] Show by example that if we have ||D^h u||L1(V) ≤ C for all 0 < |h| < (1/2) dist(V, ∂U), it does not necessarily follow that u ∈ W1,1(V).
8.26: [Qualifying exam 01/01] Let A be a compact and self-adjoint operator in a Hilbert
space H. Let {u_k}_{k∈N} be an orthonormal basis of H consisting of eigenfunctions of A with
associated eigenvalues {λ_k}_{k∈N}. Prove that if λ ≠ 0 and λ is not an eigenvalue of A, then
A − λI has a bounded inverse and
||(A − λI)^{−1}|| ≤ 1 / inf_{k∈N} |λ_k − λ| < ∞.
8.27: [Qualifying exam 01/03] Let Ω be a bounded domain in Rn with smooth boundary.
a) For f ∈ L2(Ω) show that there exists a uf ∈ H10(Ω) such that
||f||^2_{H−1(Ω)} = ||u_f||^2_{H10(Ω)} = (f, u_f)L2(Ω).
b) Show that L2(Ω) ⊂⊂−→ H−1(Ω). You may use the fact that H10(Ω) ⊂⊂−→ L2(Ω).
8.28: [Qualifying exam 08/02] Let Ω be a bounded, connected and open set in Rn with
smooth boundary. Let V be a closed subspace of H1(Ω) that does not contain nonzero
constant functions. Using H1(Ω) ⊂⊂−→ L2(Ω), show that there exists a constant C < ∞,
independent of u, such that for u ∈ V ,
∫_Ω |u|^2 dx ≤ C ∫_Ω |Du|^2 dx.
9 Function Spaces
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
9.1 Introduction
9.2 Holder Spaces
9.3 Lebesgue Spaces
9.4 Morrey and Companato Spaces
9.5 The Spaces of Bounded Mean Oscillation
9.6 Sobolev Spaces
9.7 Sobolev Imbeddings
9.8 Difference Quotients
9.9 A Note about Banach Algebras
9.10 Exercises
9.1 Introduction
A central goal of the theoretical study of PDE is to ascertain properties of solutions of
PDE that are not directly attainable by direct analytic means. To this end, one usually
seeks to classify solutions of PDE as belonging to certain function spaces whose elements
have certain known properties. This is, by no means, trivial, as the process requires not only
considering/defining appropriate function spaces but also elucidating the properties of those spaces.
In this chapter we will consider some of the most prolific spaces in theoretical PDE along
with some of their respective properties.
9.2 Holder Spaces
To start the discussion on function spaces, Holder spaces will first be considered as they are
easier to construct than Sobolev Spaces.
Consider open Ω ⊂ Rn and 0 < γ ≤ 1. Recall that a Lipschitz continuous function
u : Ω→ R satisfies
|u(x)− u(y)| ≤ C|x − y | (x, y ∈ Ω)
for some constant C. Of course, this estimate implies that u is continuous with a uniform
modulus of continuity. A generalization of the Lipschitz condition is
|u(x)− u(y)| ≤ C|x − y |γ (x, y ∈ Ω)
for some constant C. If this estimate holds for a function u, that function is said to be
Holder continuous with exponent γ.
Definition 9.2.1: (i) If u : Ω → R is bounded and continuous, we write
||u||C(Ω) := sup_{x∈Ω} |u(x)|.
(ii) The γth-Holder seminorm of u : Ω → R is
[u]C0,γ(Ω) := sup_{x,y∈Ω, x≠y} |u(x) − u(y)| / |x − y|^γ,
and the γth-Holder norm is
||u||C0,γ(Ω) := ||u||C(Ω) + [u]C0,γ(Ω).
Definition 9.2.2: The Holder space
Ck,γ(Ω)
consists of all functions u ∈ Ck(Ω) for which the norm
||u||Ck,γ(Ω) := ∑_{|α|≤k} ||D^α u||C(Ω) + ∑_{|α|=k} [D^α u]C0,γ(Ω)
is finite.
So basically, Ck,γ(Ω) consists of those functions u which are k-times continuously differentiable
with the kth-partial derivatives Holder continuous with exponent γ. As one can easily
ascertain, these functions are very well-behaved, but these spaces also have a very good
mathematical structure.
Theorem 9.2.3 (): The function spaces Ck,γ(Ω) are Banach spaces.
The proof of this is straightforward and is left as an exercise for the reader.
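As a quick numerical illustration of Holder continuity (a sketch added to these notes, not part of the original text; the function √x and the grid are arbitrary choices): on a finite grid, the C0,1/2 seminorm of √x stays bounded while the Lipschitz (γ = 1) difference quotient blows up near 0.

```python
# Estimate the Holder seminorm [u]_{C^{0,gamma}} of u(x) = sqrt(x) on [0,1]
# by maximizing the difference quotient over pairs of grid points.
import math

def holder_seminorm(u, gamma, xs):
    best = 0.0
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            best = max(best, abs(u(x) - u(y)) / abs(x - y) ** gamma)
    return best

xs = [i / 400 for i in range(401)]
u = math.sqrt

s_half = holder_seminorm(u, 0.5, xs)  # bounded: sqrt is C^{0,1/2}
s_one = holder_seminorm(u, 1.0, xs)   # grows as the grid refines: not Lipschitz
print(s_half, s_one)
```

Refining the grid leaves `s_half` near 1 but makes `s_one` grow without bound, matching the fact that √x is Holder continuous with exponent 1/2 but not Lipschitz on [0, 1].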
9.3 Lebesgue Spaces
Notation: E is a measurable set, and Ω is an open set ⊂ Rn.
Definition 9.3.1: For 1 ≤ p < ∞ we define
Lp(E) = { f : E → R measurable : ∫_E |f|^p < ∞ },
L∞(E) = { f : E → R measurable : ess sup |f| < ∞ }, where ess sup |f| = inf{ K ≥ 0 : |f| ≤ K a.e. }.
Two functions are said to be equivalent if f = g a.e. Then we define Lp(E) to be the resulting set of equivalence classes,
Lp(E) = Lp(E)/∼.

Theorem 9.3.2 (): The spaces Lp(E) are Banach spaces with
||f||p = [ ∫_E |f|^p dλ ]^{1/p}, 1 ≤ p < ∞,
||f||∞ = ess sup |f|.
• Lp(E) = { f : E → R : ∫ |f|^p dx < ∞ }. These are indeed equivalence classes of
functions (as stated in the last lecture). In general we can regard this as a space of
functions. We do, however, need to be careful sometimes. The main such point is
that saying that f ∈ Lp is continuous means: there exists a representative of f that is
continuous, sometimes referred to as "a version of f".
• Lploc(Ω) = { f : Ω → R measurable : ∫_{Ω′} |f|^p dx < ∞ ∀Ω′ ⊂⊂ Ω }.
Example: f(x) = 1/x ∉ L1((0, 1)), but f ∈ L1loc((0, 1)).
Theorem 9.3.3 (Young's Inequality): Let 1 < p < ∞ and a, b ≥ 0. Then
ab ≤ a^p/p + b^{p′}/p′,
where p′ is the dual exponent to p defined by
1/p + 1/p′ = 1.
General convention: (1)′ = ∞, (∞)′ = 1.
Proof: Assume a, b ≠ 0. Since ln is convex downward (concave),
ln(λx + (1 − λ)y) ≥ λ ln x + (1 − λ) ln y = ln(x^λ y^{1−λ}).
Taking the exponential of both sides,
λx + (1 − λ)y ≥ x^λ y^{1−λ}.
Defining λ = 1/p (so that 1 − λ = 1 − 1/p = 1/p′), x^λ = a, y^{1−λ} = b yields Young's inequality.
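As a quick sanity check (an addition to the notes; the exponent p = 3 and the random samples are arbitrary choices), Young's inequality can be tested numerically:

```python
# Verify ab <= a^p/p + b^{p'}/p' with 1/p + 1/p' = 1 on random samples.
import random

random.seed(0)
p = 3.0
pp = p / (p - 1)  # dual exponent p'
gap = max(a * b - (a ** p / p + b ** pp / pp)
          for a, b in ((random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10000)))
print(gap)  # never positive: the inequality holds on every sample
```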
Theorem 9.3.4 (Holder's Inequality): If f ∈ Lp(E) and g ∈ Lp′(E), then fg ∈ L1(E) and
| ∫_E fg | ≤ ||f||p ||g||p′.
Proof: The cases p = 1 and p = ∞ are clear, so assume 1 < p < ∞ in order to use Young's inequality:
(|f|/||f||p) · (|g|/||g||p′) ≤ (1/p) · |f|^p/||f||p^p + (1/p′) · |g|^{p′}/||g||p′^{p′}.
Integrating over E,
∫_E (|f|/||f||p) · (|g|/||g||p′) ≤ (1/p) ∫_E |f|^p/||f||p^p + (1/p′) ∫_E |g|^{p′}/||g||p′^{p′} = 1/p + 1/p′ = 1
=⇒ ∫_E |f| · |g| dx ≤ ||f||p ||g||p′.
Remark: Sometimes f = 1 is a good choice:
∫_E 1 · |g| dx ≤ ( ∫_E 1^p )^{1/p} ( ∫_E |g|^{p′} )^{1/p′} = |E|^{1/p} ||g||p′.
You gain a factor of |E|^{1/p}.
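Holder's inequality can likewise be sanity-checked on Riemann sums (an addition to the notes; the functions f, g and the exponent p = 3 are arbitrary choices):

```python
# Check |∫ fg| <= ||f||_p ||g||_{p'} on E = (0,1), with integrals
# approximated by midpoint Riemann sums (discrete Holder holds exactly).
import math

N = 10000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]
f = [math.sin(3 * x) + 2 for x in xs]
g = [math.exp(-x) for x in xs]

p = 3.0
pp = p / (p - 1)  # dual exponent p'
lhs = abs(sum(fi * gi for fi, gi in zip(f, g)) * h)
rhs = (sum(abs(fi) ** p for fi in f) * h) ** (1 / p) * \
      (sum(abs(gi) ** pp for gi in g) * h) ** (1 / pp)
print(lhs, rhs)  # lhs <= rhs
```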
Lemma 9.3.5 (Minkowski's inequality): Let 1 ≤ p ≤ ∞. Then ||f + g||p ≤ ||f||p + ||g||p.
Proof: This is clear for p = 1 and p = ∞, so assume 1 < p < ∞. Convexity of x^p implies that
|f + g|^p ≤ 2^{p−1}(|f|^p + |g|^p) =⇒ f + g ∈ Lp.
Next,
|f + g|^p = |f + g| · |f + g|^{p−1} ≤ |f| · |f + g|^{p−1} + |g| · |f + g|^{p−1},
where |f|, |g| ∈ Lp and |f + g|^{p−1} ∈ Lp′, since p′ = p/(p − 1) and (|f + g|^{p−1})^{p′} = |f + g|^p. Integrating and using Holder's inequality,
∫ |f + g|^p dx ≤ ||f||p ( ∫ |f + g|^{(p−1)p′} )^{1/p′} + ||g||p ( ∫ |f + g|^{(p−1)p′} )^{1/p′}
= (||f||p + ||g||p) ( ∫ |f + g|^p )^{1/p′}
= (||f||p + ||g||p) ||f + g||p^{p/p′}.
Thus, we have
||f + g||p^{p − p/p′} ≤ ||f||p + ||g||p.
Now p − p/p′ = p(1 − 1/p′) = p · (1/p) = 1, so we have the result.
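And a matching check for Minkowski's inequality with discrete ℓ^p norms (an addition to the notes; p = 2.5 and the random vectors are arbitrary choices):

```python
# Check ||f + g||_p <= ||f||_p + ||g||_p for discrete l^p norms.
import random

random.seed(1)
p = 2.5
f = [random.uniform(-1, 1) for _ in range(1000)]
g = [random.uniform(-1, 1) for _ in range(1000)]

def norm_p(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

lhs = norm_p([a + b for a, b in zip(f, g)], p)
rhs = norm_p(f, p) + norm_p(g, p)
print(lhs, rhs)  # lhs <= rhs
```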
Proof of 9.3.2 (Lp is complete): The case p = ∞ is "simpler than 1 ≤ p < ∞": if {fn} is Cauchy in L∞, then fn(x) is Cauchy for a.e. x, and one can define the limit f(x) a.e.
Theorem (Riesz-Fischer): Lp is complete for 1 ≤ p < ∞.
Proof: Suppose that {fk} is Cauchy: ∀ε > 0, ∃k(ε) such that
||fk − fl||p < ε if k, l ≥ k(ε).
Choose a subsequence that converges fast in the sense that
∑_{i=1}^∞ ||f_{k_{i+1}} − f_{k_i}||p < ∞,
and relabel the subsequence, calling it again fk. Now define
g_l = ∑_{k=1}^l |f_{k+1} − f_k|;
then g_l ≥ 0 and, by Minkowski's inequality, ||g_l||p ≤ C. By Fatou's Lemma,
∫ lim_{l→∞} g_l^p dx ≤ lim inf_{l→∞} ∫ g_l^p dx ≤ ( lim inf_{l→∞} ||g_l||p )^p < ∞.
Thus lim_{l→∞} g_l is finite for a.e. x, so fk(x) is a Cauchy sequence for a.e. x, and lim_{k→∞} fk(x) = f(x) exists a.e. By Fatou,
∫ |f − fl|^p dx = ∫ lim_{k→∞} |fk(x) − fl(x)|^p dx
≤ lim inf_{k→∞} ∫ |fk − fl|^p dx
≤ ( lim inf_{k→∞} ||fk − fl||p )^p
≤ ( ∑_{k=l}^∞ ||fk − f_{k+1}||p )^p → 0 as l → ∞
=⇒ ||fl − f||p → 0.
We know f is in Lp by a simple application of Minkowski's inequality to the last result.
Sobolev Spaces ⇐⇒ f ∈ Lp with derivatives.
Calculus for smooth functions is linked to the calculus of Sobolev functions by ideas of
approximation via convolution methods.
Theorem 9.3.6 (): (see Royden for proof) Let 1 ≤ p < ∞. Then the space of continuous
functions with compact support is dense in Lp, i.e.,
C0c(Ω) ⊂ Lp(Ω) densely.
Idea of Proof:
1. Simple functions ∑_{i=1}^∞ a_i χ_{E_i} are dense.
2. Approximate χ_{E_i} by step functions on cubes.
3. Construct a smooth approximation of χ_Q for a cube Q.
9.3.1 Convolutions and Mollifiers
• Sobolev functions = Lp functions with derivatives.
• Convolution = approximation of Lp functions by smooth functions; a smooth approximation of the Dirac δ:
– φ ∈ C∞c(Rn), supp(φ) ⊂ B(0, 1),
– φ ≥ 0, ∫_{Rn} φ(x) dx = 1.
Rescaling: φ_ε(x) = (1/ε^n) φ(x/ε), so ∫_{Rn} φ_ε(x) dx = 1. Moreover, φ_ε(x) → 0 as ε → 0 pointwise a.e., while ∫_{Rn} φ_ε(x) dx = 1.
Definition 9.3.7: The convolution
f_ε(x) = (φ_ε ∗ f)(x) = ∫_{Rn} φ_ε(x − y) f(y) dy = ∫_{B(x,ε)} φ_ε(x − y) f(y) dy
is the weighted average of f on a ball of radius ε > 0.
Theorem 9.3.8 (): f_ε ∈ C∞.
Proof: Difference quotients and Lebesgue's dominated convergence theorem.
Now we ask: given f ∈ Lp, does f_ε → f in Lp? Note that for p = ∞ this cannot hold in general: since f_ε ∈ C∞ ⊂ C0, convergence f_ε → f in L∞ would force f ∈ C0.
Lemma 9.3.9 (): Let f ∈ C0(Ω) and let Ω′ ⊂⊂ Ω be a compact subset. Then f_ε → f uniformly on Ω′!
Proof: Let
d = dist(Ω′, ∂Ω) = inf{ |x − y| : x ∈ Ω′, y ∈ ∂Ω }
and set Ω′′ = { x ∈ Ω : dist(x, Ω′) ≤ d/2 }. Let ε ≤ d/2 and x ∈ Ω′, so that B(x, ε) ⊂⊂ Ω′′ and f_ε(x) is well defined. Then
|f_ε(x) − f(x)| = | ∫_{Rn} φ_ε(x − y)[f(y) − f(x)] dy |
≤ sup_{y∈B(x,ε)} |f(y) − f(x)| · | ∫_{Rn} φ_ε(x − y) dy |,
and the last integral equals 1. Since f ∈ C0(Ω), f is uniformly continuous on the compact set Ω′′ ⊂⊂ Ω. Thus
ω(ε) = sup{ |f(x) − f(y)| : x, y ∈ Ω′′, |x − y| < ε } → 0 as ε → 0,
and therefore
|f_ε(x) − f(x)| ≤ ω(ε) → 0 uniformly on Ω′ as ε → 0.
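The uniform-convergence statement can be watched numerically. The following sketch (an addition to these notes; the bump kernel, the function f(x) = |sin 4x|, and the grid sizes are all arbitrary choices) mollifies a continuous function with a normalized discrete bump and measures the sup-distance on an interior grid:

```python
import math

def phi(t):
    # standard smooth bump supported in (-1, 1), unnormalized
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def mollify(f, x, eps, m=400):
    # midpoint quadrature of (phi_eps * f)(x) over y in (x - eps, x + eps),
    # normalizing by the quadrature mass of phi_eps so the kernel integrates to 1
    h = 2 * eps / m
    ys = [x - eps + (j + 0.5) * h for j in range(m)]
    w = [phi((x - y) / eps) for y in ys]
    mass = sum(w) * h
    return sum(wi * f(y) for wi, y in zip(w, ys)) * h / mass

f = lambda x: abs(math.sin(4 * x))  # continuous (with kinks)
err = max(abs(mollify(f, x, 0.01) - f(x)) for x in [i / 50 for i in range(1, 50)])
print(err)  # small sup-error on the interior grid; it shrinks as eps -> 0
```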
Proposition 9.3.10 (): If u, v ∈ L1(Rn), then u ∗ v ∈ L1(Rn) ∩ C0(Rn). This is a special
case of the following: given f ∈ L1(Rn) and g ∈ Lp(Rn) with p ∈ [1, ∞], then f ∗ g ∈ Lp(Rn) ∩ C0(Rn).
Proof: We will prove the special case (p = 1) first:
||f ∗ g||L1(Rn) = ∫_{Rn} | ∫_{Rn} f(x − y) g(y) dy | dx
≤ ∫_{Rn} |g(y)| ∫_{Rn} |f(x − y)| dx dy
= ∫_{Rn} |g(y)| ∫_{Rn} |f(z)| dz dy = ||f||L1(Rn) ||g||L1(Rn).
There is nothing to show for p = ∞. For p ∈ (1, ∞), we use Holder's inequality to compute
||f ∗ g||^p_{Lp(Rn)} = ∫_{Rn} | ∫_{Rn} f(y) g(x − y) dy |^p dx
≤ ∫_{Rn} ( ∫_{Rn} |f(y)|^{1/p′} |f(y)|^{1/p} |g(x − y)| dy )^p dx
≤ ∫_{Rn} ( ∫_{Rn} |f(y)| dy )^{p/p′} ( ∫_{Rn} |f(y)| · |g(x − y)|^p dy ) dx
= ||f||^{p/p′}_{L1(Rn)} ∫_{Rn} ∫_{Rn} |f(y)| · |g(x − y)|^p dy dx
= ||f||^{p/p′}_{L1(Rn)} ∫_{Rn} |f(y)| ∫_{Rn} |g(x − y)|^p dx dy
= ||f||^{p/p′}_{L1(Rn)} ||f||L1(Rn) ||g||^p_{Lp(Rn)} = ||f||^p_{L1(Rn)} ||g||^p_{Lp(Rn)},
which proves the assertion.
Lemma 9.3.11 (): Let f ∈ Lp(Ω), 1 ≤ p < ∞. Then f_ε → f in Lp(Ω).
Remarks:
• The same assertion holds in Lploc(Ω).
• C∞c(Ω) is dense in Lp(Ω).
Proof: Extend f by 0 to Rn, so we may assume Ω = Rn. By Proposition 9.3.10, if f ∈ Lp(Rn) and g ∈ L1(Rn), then f ∗ g ∈ Lp(Rn) and ||f ∗ g||p ≤ ||f||p ||g||1.
So, take
g = φ_ε : ||f_ε||p = ||f ∗ φ_ε||p ≤ ||f||p · ||φ_ε||1 = ||f||p. (9.1)
220Chapter 9: Function Spaces
Need to show ∀δ > 0, ∃ε(δ) > 0, ||fε − f ||p < δ, ∀ε ≤ ε(δ).
By Theorem 8.29, ∃f1 ∈ C0c (Ω) such that
f = f1 + f2, ||f2||p = ||f − f1||p <δ
3
||fε − f || = ||(f1 + f2)ε + (f1 + f2)||p≤ ||f1ε − f1||p + ||f2ε − f2||p≤ ||f1ε − f1||p + ||f2ε||p + ||f ||p≤ ||f1ε − f1||p +
2δ
3by (9.1)
f1 has compact support in Ω, supp(f1) ⊂ Ω′ ⊂⊂ Ω, d = dist(Ω′, ∂Ω), Ω′′ =x ∈ Ω′,dist(x,Ω′) ≤ d
2
.
Now suppose that ε < d2 . By Lemma 8.30, f1ε → f1 uniformly in Ω′′ and thus in Ω (since
f1, f1ε = 0 in Ω \Ω′′). Now choose ε0 small enough such that ||f1ε − f ||p < δ3
=⇒ ||fε − f ||p ≤ δ ∀ε(δ) ≤ε0,
δ2
.
9.3.2 The Dual Space of Lp
Explicit example of T ∈ (Lp)∗:
T_g : Lp → R, T_g f = ∫ fg dx.
T_g f is well defined by Holder's inequality, and
|T_g f| ≤ ||f||p ||g||p′.
Thus,
||T_g|| = sup_{f∈Lp, f≠0} |T_g f| / ||f||p ≤ sup_{f∈Lp, f≠0} ||f||p ||g||p′ / ||f||p = ||g||p′.
In fact ||T_g|| = ||g||p′. This follows by choosing f = |g|^{p′−2} g.
Theorem 9.3.12 (): Let 1 ≤ p < ∞. Then J : Lp′ → (Lp)∗, J(g) = T_g, is onto with ||J(g)|| = ||g||p′. In particular, Lp is reflexive for 1 < p < ∞.
Remark: (L1)∗ = L∞, but (L∞)∗ ≠ L1.
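The norming choice f = |g|^{p′−2}g can be checked numerically. In this sketch (an addition to the notes, using discrete ℓ^p norms and an arbitrary vector g with exponent p = 3):

```python
# With f = |g|^{p'-2} g one gets |T_g f| / ||f||_p = ||g||_{p'} exactly.
g = [0.3, -1.2, 0.7, 2.0, -0.5]
p = 3.0
pp = p / (p - 1)  # dual exponent p'

f = [abs(x) ** (pp - 2) * x for x in g]
Tgf = sum(a * b for a, b in zip(f, g))           # = sum |g|^{p'}
norm_f = sum(abs(x) ** p for x in f) ** (1 / p)  # = (sum |g|^{p'})^{1/p}
norm_g_dual = sum(abs(x) ** pp for x in g) ** (1 / pp)
ratio = Tgf / norm_f
print(ratio, norm_g_dual)  # the two agree
```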
9.3.3 Weak Convergence in Lp
• For 1 ≤ p < ∞: fj ⇀ f weakly in Lp iff ∫ fj g → ∫ fg ∀g ∈ Lp′.
• For p = ∞: fj ⇀∗ f weakly-∗ in L∞ iff ∫ fj g → ∫ fg ∀g ∈ L1.
Examples: Obstructions to strong convergence: (1) oscillations, (2) concentrations.
• Oscillations: Ω = (0, 1), fj(x) = sin(jx). Then fj ⇀∗ 0 in L∞, but fj ↛ 0 in L∞ (since ||0 − sin(jx)||∞ = 1).
• Concentration: g ∈ C∞c(Rn), fj(x) = j^{−n/p} g(j^{−1}x). Then
∫ |fj|^p dx = ∫ j^{−n} |g(j^{−1}x)|^p dx = ∫ |g|^p dx, 1 < p < ∞.
So we see that fj ⇀ 0 in Lp, but fj ↛ 0 in Lp.
Last Lecture: We showed (Lp)∗ = Lp′, where 1/p + 1/p′ = 1 and 1 ≤ p < ∞. With this we can talk about
• weak-∗ convergence in L∞;
• weak convergence in Lp, 1 ≤ p < ∞.
So, as an example take weak convergence in L2(Ω): uj ⇀ u in L2(Ω) if
∫_Ω uj v dx → ∫_Ω uv dx ∀v ∈ L2(Ω).
Obstructions to strong convergence which are compatible with weak convergence are oscillations and concentrations.
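The oscillation example can be watched numerically. This sketch (an addition to the notes; the test function g(x) = x² and the grid size are arbitrary choices) approximates ∫_0^1 sin(jx) g(x) dx for increasing j:

```python
# Midpoint-rule approximation of the pairing ∫_0^1 sin(jx) g(x) dx:
# it decays as j grows, even though ||sin(jx)||_inf = 1 for all j.
import math

def pairing(j, g, N=200000):
    h = 1.0 / N
    return sum(math.sin(j * ((i + 0.5) * h)) * g((i + 0.5) * h) for i in range(N)) * h

g = lambda x: x * x  # a fixed test function in L^1(0, 1)
vals = [abs(pairing(j, g)) for j in (1, 10, 100)]
print(vals)  # decreasing toward 0
```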
9.4 Morrey and Companato Spaces
Loosely speaking, Morrey Spaces and Companato Spaces characterize functions based on
their “oscillations” in any given neighborhood in a set domain. This will be made clear by the
definitions. Philosophically, Morrey spaces are a precursor of Companato Spaces which in
turn are basically a modern re-characterization of the Holder Spaces introduced earlier in the
chapter. This new characterization is based on integration; and hence, is easier to utilize, in
certain proofs, when compared against the standard notion of Holder continuity.
It would have been possible to omit these spaces from this text altogether (and this is the
convention in books at this level); but I have elected to discuss these for the two following
reasons. First, the main theorem pertaining to the imbedding of Sobolev Spaces into Holder
spaces will be a natural consequence of the properties of Companato Spaces; this is much
more elegant, in the author’s opinion, than other proofs using standard properties of Holder
continuity. Second, we will utilize these spaces in the next chapter to derive the classical
Schauder regularity results for elliptic systems of equations. Again, it is the author's opinion
that these regularity results are proved much more intuitively via the use of Companato
spaces as opposed to the more standard proofs of yore. For the following introductory ex-
position, we essentially follow [Giu03, Gia93].
Per usual, we consider Ω to be open and bounded in Rn; but in addition to this, we will
also assume for all x ∈ Ω and for all ρ ≤ diam(Ω) that
|Bρ(x) ∩Ω| ≥ Aρn, (9.2)
where A > 0 is a fixed constant. This is, indeed, a condition on the boundary of Ω that pre-
cludes domains with “sharp outward cusps”; geometrically, it can be seen that any Lipschitz
domain will satisfy (9.2). With that we make our first definition.
Definition 9.4.1: The Morrey space Lp,λ(Ω), for all real p ≥ 1 and λ ≥ 0, is defined as
Lp,λ(Ω) := { u ∈ Lp(Ω) : ||u||Lp,λ(Ω) < ∞ },
where
||u||^p_{Lp,λ(Ω)} := sup_{x0∈Ω, 0<ρ<diam(Ω)} ( ρ^{−λ} ∫_{Ω(x0,ρ)} |u|^p dx )
with Ω(x0, ρ) = B(x0, ρ) ∩ Ω.
It is left up to the reader to verify that Lp,λ(Ω) is indeed a BS. One can easily see from
the definition that u ∈ Lp,λ(Ω) if and only if there exists a constant C > 0 such that
∫_{Ω(x0,ρ)} |u|^p dx < C ρ^λ, (9.3)
for all x0 ∈ Ω with 0 < ρ < ρ0, for some fixed ρ0 > 0. Sometimes, (9.3) is easier to use than
the original definition of a Morrey space. Also, since x0 ∈ Ω is arbitrary in (9.3), it really
doesn't matter what we set ρ0 > 0 to be, as Ω is assumed to be bounded (i.e. it can be covered
by a finite number of balls of radius ρ0).
In addition to this, we get that Lp,0(Ω) ∼= Lp(Ω) for free from the definition. Now, we
come to our first theorem concerning these spaces.
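For a concrete feel for criterion (9.3), here is a numerical sketch (an addition to the notes; the one-dimensional setting n = 1, Ω = (0, 1), u(x) = x^{−1/4}, p = 2, λ = 1/2 are all illustrative choices). Since ∫_0^ρ x^{−1/2} dx = 2ρ^{1/2}, the unbounded function u satisfies (9.3) with λ = 1/2, i.e. u lies in a Morrey space even though u ∉ L∞:

```python
def mass(rho, N=200000):
    # midpoint rule for ∫_0^rho x^{-1/2} dx  (this is ∫ |u|^2 with u(x) = x^{-1/4})
    h = rho / N
    return sum(((i + 0.5) * h) ** -0.5 for i in range(N)) * h

ratios = [mass(rho) / rho ** 0.5 for rho in (0.01, 0.1, 1.0)]
print(ratios)  # each ratio stays near the constant 2, uniformly in rho
```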
Theorem 9.4.2 (): If (n − λ)/p ≥ (n − µ)/q and p ≤ q, then Lq,µ(Ω) ⊂−→ Lp,λ(Ω).
Proof: First note that the index condition above can be rewritten as
µp/q + n − np/q ≥ λ.
With this in mind, the rest of the proof is a straight-forward calculation using Holder's inequality:
∫_{Ω(x0,ρ)} |u|^p dx ≤ ( ∫_{Ω(x0,ρ)} |u|^q dx )^{p/q} |Ω(x0, ρ)|^{1−p/q}   (Holder)
≤ α(n)^{1−p/q} · ( ||u||^q_{Lq,µ(Ω)} · ρ^µ )^{p/q} · ρ^{n(1−p/q)}
= α(n)^{1−p/q} · ||u||^p_{Lq,µ(Ω)} · ρ^{µp/q + n − np/q}
≤ α(n)^{1−p/q} · ||u||^p_{Lq,µ(Ω)} · ρ^λ.
Note that the third inequality utilizes the assumption p ≤ q. The above calculation implies that
||u||^p_{Lp,λ(Ω)} ≤ C ||u||^p_{Lq,µ(Ω)},
i.e. Lq,µ(Ω) ⊂−→ Lp,λ(Ω).
Corollary 9.4.3 (): Lp,λ(Ω) ⊂−→ Lp(Ω) for any λ > 0.
Proof: This is a direct consequence of the previous theorem.
The next two theorems are more specific index relations for Morrey Spaces, but are very
important properties nonetheless.
Theorem 9.4.4 (): If dim(Ω) = n, then Lp,n(Ω) ∼= L∞(Ω).
Proof: First consider u ∈ L∞(Ω). In this case,
∫_{Ω(x0,ρ)} |u|^p dx ≤ C · |Ω(x0, ρ)| · ||u||^p_{L∞(Ω)} = C · α(n) ρ^n · ||u||^p_{L∞(Ω)} < ∞,   (Holder)
i.e. u ∈ Lp,n(Ω). If, on the other hand, u ∈ Lp,n(Ω), then, considering that for a.e. x0 ∈ Ω we have
u(x0) = lim_{ρ→0} (1/|Ω(x0, ρ)|) ∫_{Ω(x0,ρ)} u dx
via Lebesgue's theorem, it is calculated that
|u(x0)| ≤ lim_{ρ→0} −∫_{Ω(x0,ρ)} |u| dx ≤ lim_{ρ→0} ( −∫_{Ω(x0,ρ)} |u|^p dx )^{1/p} < ∞.
Thus, we conclude u ∈ L∞(Ω).
Theorem 9.4.5 (): If dim(Ω) = n, then Lp,λ(Ω) = {0} for λ > n.
Proof: Fix an arbitrary x0 ∈ Ω. Using Lebesgue's theorem, we calculate
|u(x0)| ≤ lim_{ρ→0} (1/|Ω(x0, ρ)|) ∫_{Ω(x0,ρ)} |u| dx
≤ lim_{ρ→0} ( |Ω(x0, ρ)|^{1−1/p} / |Ω(x0, ρ)| ) ( ∫_{Ω(x0,ρ)} |u|^p dx )^{1/p}   (Holder)
= lim_{ρ→0} |Ω(x0, ρ)|^{−1/p} ( ∫_{Ω(x0,ρ)} |u|^p dx )^{1/p}
≤ lim_{ρ→0} C · ρ^{−n/p} ρ^{λ/p} = lim_{ρ→0} C · ρ^{(λ−n)/p} = 0,
since λ − n > 0 (here (9.2) gives |Ω(x0, ρ)| ≥ A ρ^n). As x0 ∈ Ω is arbitrary, we see that u ≡ 0.
Now, that a few of the basic properties of Morrey Spaces have been elucidated, we will
now consider a closely related group of spaces.
Definition 9.4.6: The Companato spaces, Lp,λ(Ω), are defined as
Lp,λ(Ω) := { u ∈ Lp(Ω) : [u]Lp,λ(Ω) < ∞ },
where
[u]^p_{Lp,λ(Ω)} := sup_{x0∈Ω, 0<ρ<diam(Ω)} ρ^{−λ} ( ∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx )
with
u_{x0,ρ} := −∫_{Ω(x0,ρ)} u dx.
First, one will notice that [u]Lp,λ(Ω) is only a seminorm; indeed, [u]Lp,λ(Ω) = 0 if and only if u is a constant. Thus, we define
|| · ||Lp,λ(Ω) := || · ||Lp(Ω) + [ · ]Lp,λ(Ω)
as the norm on Lp,λ(Ω), making it a BS. The verification of this is left to the reader.
In analogy with theorem 9.4.2, we have the following.
Theorem 9.4.7 (): If (n − λ)/p ≥ (n − µ)/q and p ≤ q, then Lq,µ(Ω) ⊂−→ Lp,λ(Ω).
Proof: See proof of theorem 9.4.2.
Just from looking at the definitions, naive intuition suggests that there should be some
correspondence between Morrey and Companato Spaces, but this correspondence is by no
means obvious. The next two theorems will actually elucidate the correspondence between
Morrey, Companato and Holder Spaces, but we first prove a lemma that will subsequently
be needed in the proofs of the aforementioned theorems.
Lemma 9.4.8 (): If Ω is bounded and satisfies (9.2), then for any R > 0,
|u_{x0,R_k} − u_{x0,R_h}| ≤ C · [u]Lp,λ(Ω) · R_k^{(λ−n)/p}, (9.4)
where R_i := 2^{−i} · R and 0 < h < k.
Proof: First we fix an arbitrary R > 0 and define r such that 0 < r < R. Now, we calculate (using (9.2) in the first step) that
r^n · |u_{x0,R} − u_{x0,r}|^p ≤ (1/A) · |Ω(x0, r)| · |u_{x0,R} − u_{x0,r}|^p
= (1/A) ∫_{Ω(x0,r)} |u_{x0,R} − u_{x0,r}|^p dx
≤ (2^{p−1}/A) [ ∫_{Ω(x0,r)} |u(x) − u_{x0,R}|^p + |u(x) − u_{x0,r}|^p dx ]
≤ (2^{p−1}/A) [ ∫_{Ω(x0,R)} |u(x) − u_{x0,R}|^p dx + ∫_{Ω(x0,r)} |u(x) − u_{x0,r}|^p dx ]
≤ (2^{p−1}/A) · [R^λ + r^λ] · [u]^p_{Lp,λ(Ω)}
≤ C · R^λ · [u]^p_{Lp,λ(Ω)}.
Dividing by r^n and taking the p-th root of this result yields
|u_{x0,R} − u_{x0,r}| ≤ C · [u]Lp,λ(Ω) · R^{λ/p} · r^{−n/p}. (9.5)
From here, we will utilize the definition of R_i and (9.5) to ascertain
|u_{x0,R_i} − u_{x0,R_{i+1}}| ≤ C · [u]Lp,λ(Ω) · 2^{i(n−λ)/p + n/p} · R^{(λ−n)/p}
≤ C · [u]Lp,λ(Ω) · 2^{i(n−λ)/p} · R^{(λ−n)/p};
in the last inequality, the factor 2^{n/p} has been absorbed into the constant. Trudging on, we finish the calculation to discover
|u_{x0,R} − u_{x0,R_{h+1}}| ≤ ∑_{i=0}^{h} |u_{x0,R_i} − u_{x0,R_{i+1}}|
≤ C · [u]Lp,λ(Ω) · R^{(λ−n)/p} ∑_{i=0}^{h} 2^{i(n−λ)/p}
= C · [u]Lp,λ(Ω) · R^{(λ−n)/p} · ( (2^{(h+1)(n−λ)/p} − 1) / (2^{(n−λ)/p} − 1) )
≤ C · [u]Lp,λ(Ω) · R_{h+1}^{(λ−n)/p}. (9.6)
The conclusion follows, as (9.6) is valid for any h ≥ 0 and R > 0 was chosen arbitrarily.
Theorem 9.4.9 (): If dim(Ω) = n and 0 ≤ λ < n, then Lp,λ(Ω) ∼= Lp,λ(Ω).
Proof: First suppose u ∈ Lp,λ(Ω) and calculate
ρ^{−λ} ∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx ≤ 2^{p−1} ρ^{−λ} [ ∫_{Ω(x0,ρ)} |u|^p + |u_{x0,ρ}|^p dx ]
≤ 2^{p−1} ρ^{−λ} [ ∫_{Ω(x0,ρ)} |u|^p dx + |Ω(x0, ρ)| · |u_{x0,ρ}|^p ]
≤ 2^{p−1} ρ^{−λ} [ ∫_{Ω(x0,ρ)} |u|^p dx + |Ω(x0, ρ)| · −∫_{Ω(x0,ρ)} |u|^p dx ]   (Holder)
≤ 2^p ρ^{−λ} ∫_{Ω(x0,ρ)} |u|^p dx < ∞.
Thus, we see that u ∈ Lp,λ(Ω).
Now assume u ∈ Lp,λ(Ω). We find
ρ^{−λ} ∫_{Ω(x0,ρ)} |u|^p dx = ρ^{−λ} ∫_{Ω(x0,ρ)} |u − u_{x0,ρ} + u_{x0,ρ}|^p dx
≤ 2^{p−1} ρ^{−λ} [ ∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx + |Ω(x0, ρ)| · |u_{x0,ρ}|^p ]
≤ 2^{p−1} ρ^{−λ} ∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx + C · ρ^{n−λ} |u_{x0,ρ}|^p. (9.7)
To proceed with (9.7), we must endeavor to estimate |u_{x0,ρ}|. To do this, we consider 0 < r < R along with the calculation
|u_{x0,ρ}|^p ≤ 2^{p−1} ( |u_{x0,R}|^p + |u_{x0,R} − u_{x0,ρ}|^p ). (9.8)
Thus, we now seek to estimate the RHS. To do this, we utilize (9.6) with diam(Ω) < R ≤ 2 · diam(Ω) chosen such that R_h = ρ for some h > 0. Inserting this result into (9.8) and (9.8) into (9.7) yields
ρ^{−λ} ∫_{Ω(x0,ρ)} |u|^p dx ≤ 2^{p−1} [ ρ^{−λ} ∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx + C · ρ^{n−λ} 2^{p−1} · | −∫_{Ω(x0,R)} u dx |^p + C · 2^{p−1} · [u]^p_{Lp,λ(Ω)} ] < ∞.
Thus, u ∈ Lp,λ(Ω).
Unlike Lp,λ(Ω), we have L∞(Ω) ( Lp,n(Ω). To show this we consider the following.
Example 8: Consider log x, where x ∈ (0, 1). On any given sub-interval [a, b] ⊂ (0, 1), we have
∫_a^b | log x | dx = a · log a − b · log b + (b − a).
Now, if we multiply by ρ^{−n} = 2/(b − a), we have
(2/(b − a)) ∫_a^b | log x | dx = 2 ( (a · log a − b · log b)/(b − a) + 1 ).
If we could show that the RHS is bounded for any 0 < a < b < 1, then we would have shown that log x ∈ L1,1((0, 1)). But choosing b := e^{−i} and a := e^{−(i+1)}, where i > 0, shows that
lim_{i→∞} (2e^i/(1 − e^{−1})) ∫_{e^{−(i+1)}}^{e^{−i}} | log x | dx = 2 · lim_{i→∞} ( (i − e^{−1}(i + 1))/(1 − e^{−1}) + 1 )
= 2 · lim_{i→∞} ( ((1 − e^{−1})i − e^{−1})/(1 − e^{−1}) + 1 )
= ∞.
Thus, log x ∉ L1,1(Ω), which is in agreement with theorem 9.4.4. Now considering the same interval [a, b], it is also easily calculated that
(2/(b − a)) ∫_a^b | log x − (log x)_{(b+a)/2,(b−a)/2} | dx = 4,
i.e. log x ∈ L1,1(Ω). On the other hand, it is obvious that log x ∉ L∞((0, 1)). Thus,
L∞(Ω) ( Lp,n(Ω).
As we have seen, Companato spaces can be identified with Morrey spaces in the interval 0 ≤ λ < n. For λ > n we have the following.
Theorem 9.4.10 (): For n < λ ≤ n + p,
Lp,λ(Ω) ∼= C0,α(Ω) with α = (λ − n)/p,
whereas for λ > n + p,
Lp,λ(Ω) = { constants }.
Proof: From lemma 9.4.8 and the assumption that λ > n, we observe that (u_{x0,R_k})_k is a Cauchy sequence for all x0 ∈ Ω, so ū(x0) := lim_{k→∞} u_{x0,R_k} exists for every x0 ∈ Ω. By Lebesgue's theorem,
u_{x0,R_k} → u(x0) (k → ∞)
for a.e. x0 ∈ Ω. Next, we note that v_k(x) := u_{x,R_k} ∈ C(Ω); this can be proved from a standard continuity argument, noting that u ∈ L1(Ω). Taking h → ∞ in (9.6) yields
|u_{x0,R_k} − ū(x0)| ≤ C · [u]Lp,λ(Ω) R_k^{(λ−n)/p}. (9.9)
From this, we deduce that v_k → ū uniformly in Ω; hence ū is continuous. Taking ū to represent u, we now go on to show that u is Holder continuous. First, take x, y ∈ Ω and set R = |x − y| and α = (λ − n)/p.
Figure 9.1: Layout for proof of Theorem 9.4.10
We start first by approximating |u(x) − u(y)|:
|u(x) − u(y)| ≤ |u_{x,2R} − u(x)| + |u_{x,2R} − u_{y,2R}| + |u_{y,2R} − u(y)|,
where the first and third terms on the RHS are each ≤ C · R^α by (9.9). Next we figure
|Ω(x, 2R) ∩ Ω(y, 2R)| · |u_{x,2R} − u_{y,2R}|
≤ ∫_{Ω(x,2R)} |u(z) − u_{x,2R}| dz + ∫_{Ω(y,2R)} |u(z) − u_{y,2R}| dz
≤ ( ∫_{Ω(x,2R)} |u(z) − u_{x,2R}|^p dz )^{1/p} |Ω(x, 2R)|^{1−1/p} + ( ∫_{Ω(y,2R)} |u(z) − u_{y,2R}|^p dz )^{1/p} |Ω(y, 2R)|^{1−1/p}
≤ 2^{1+n+(n−λ)/p} · diam(Ω)^n · [u]Lp,λ(Ω) R^{λ/p} R^{n(1−1/p)}
= C · [u]Lp,λ(Ω) R^α. (9.10)
As R = |x − y|, we have thus shown:
[u]C0,α(Ω) = sup_{x,y∈Ω, x≠y} |u(x) − u(y)| / |x − y|^α ≤ C · [u]Lp,λ(Ω). (9.11)
We still aren't quite done. Remember, we are seeking our conclusion in the BS norms of C0,α(Ω) and Lp,λ(Ω); the last calculation only pertains to the seminorms. Let us pick an arbitrary x ∈ Ω and define v(x) := |u(x)| − inf_Ω |u|. Obviously v(x) ≥ 0, and there exists a sequence {x_n} ⊂ Ω such that v(x_n) → 0 as n → ∞. Thus, it is clear that
|v(x)| = lim_{n→∞} |v(x) − v(x_n)|
≤ (diam(Ω))^α · sup_n |v(x) − v(x_n)| / |x − x_n|^α
≤ (diam(Ω))^α · [v]C0,α(Ω)
≤ C · (diam(Ω))^α · [u]Lp,λ(Ω).
The last inequality comes from (9.11) and the fact that v and |u| differ only by a constant. With the last calculation in mind we finally realize
|u(x)| = |v(x)| + inf_Ω |u|
≤ C · (diam(Ω))^α · [u]Lp,λ(Ω) + (1/|Ω|) ∫_Ω |u| dx
≤ C · (diam(Ω))^α · [u]Lp,λ(Ω) + |Ω|^{−1/p} ( ∫_Ω |u|^p dx )^{1/p}   (Holder).
Combining this result with (9.11), we conclude
sup_{x∈Ω} |u(x)| + [u]C0,α(Ω) ≤ C ( ||u||Lp(Ω) + [u]Lp,λ(Ω) ),
completing the proof!
To conclude the section, two corollaries are put to the reader's attention. Both of these
are proved by the exact same argument as the previous theorem.
Corollary 9.4.11 (): If
∫_{Ω(x0,ρ)} |u − u_{x0,ρ}|^p dx ≤ C ρ^λ
for all ρ ≤ ρ0 (instead of ρ < diam(Ω)), then the conclusion of theorem (9.4.10) is true
with Ω replaced by Ω(x0, ρ0), i.e. local version of (9.4.10).
Corollary 9.4.12 (): If
∫_{Bρ(x0)} |u − u_{x0,ρ}|^2 dx ≤ C ρ^λ
for all x0 ∈ Ω and ρ < dist(x0, ∂Ω), then for λ > n, we conclude that u is Holder continuous
in every subdomain of Ω.
9.4.1 L∞ Companato Spaces
Given the characterizations of Morrey and Companato spaces, one may ask if there is a
generalization of these spaces. It turns out there is, but only parts (corresponding to ranges
of indices) of these spaces are useful for our studies.
To construct our generalization, consider
Pk := { T(x) : T(x) = ∑_{|β|≤k} C_β · x^β, C_β = const },
i.e. the set of all polynomials of degree k or less. Also, let us define Tk,xu as the kth order
Taylor expansion of u about x :
T_{k,x}u(y) = ∑_{|β|≤k} (D^β u(x) / β!) · (y − x)^β.
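To get a feel for why Taylor polynomials are the right comparison class here, the following numerical sketch (an addition to the notes; u = exp, x = 0, k = 2 are arbitrary choices) checks that the sup-deviation of u from T_{2,0}u on [−ρ, ρ] decays like ρ^{k+1} = ρ^3:

```python
import math

def T2(y):
    # second-order Taylor expansion of exp about 0
    return 1 + y + y * y / 2

ratios = []
for rho in (0.4, 0.2, 0.1):
    dev = max(abs(math.exp(y) - T2(y)) for y in [rho * (i / 100 - 1) for i in range(201)])
    ratios.append(dev / rho ** 3)
print(ratios)  # roughly constant: dev ~ C * rho^3
```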
Now we define our generalization.
Definition 9.4.13: The Lp Companato spaces, Lp,λ_k(Ω), are defined as
Lp,λ_k(Ω) := { u ∈ Lp(Ω) : [u]Lp,λ_k(Ω) < ∞ },
where
[u]Lp,λ_k(Ω) := sup_{x0∈Ω, 0<ρ<diam(Ω)} ρ^{−(k+λ)} · inf_{T∈Pk} ( ∫_{Ω(x0,ρ)} |u − T|^p dx )
It is easy to notice that Lp,λ_0(Ω) is just Lp,λ(Ω). One could also fit Morrey spaces into
this picture by defining P∗ = {0} and denoting the Morrey space by Lp,λ_∗(Ω).
Now, since Pk ⊂ Pk+1, we readily ascertain that Lp,λ_{k+1}(Ω) ⊂ Lp,λ_k(Ω). At this point we
are inclined to ask whether it is useful to consider k > 0, as Companato spaces can already
characterize Holder continuous functions very well. In addition, if we did need a weaker criterion,
we could always appeal to Sobolev spaces. We could counteract the weakening effect of
increasing k by increasing p, but such index balancing isn't immediately useful within the scope
of this book. What does turn out to be useful is to consider the "other end of the index
spectrum", i.e. take k arbitrary with p = ∞ (as opposed to taking p arbitrary with k = 0:
the regular Companato space case). With p = ∞, it turns out that L∞,α_k(Ω) ∼= Ck,α(Ω);
but before we prove this, we first need the following lemma.
Warning: The following proof is somewhat technically involved and can be skipped without
losing continuity. That being said, the proof itself presents some clever tools that can
be employed for bounding Holder norms.
Lemma 9.4.14 (): If u ∈ Ck,α(Ω) with Ω bounded, then
ρ_x^k · sup_Ω |D^β u| ≤ C(ε) · sup_Ω |u| + ε · ρ_x^{α+k} · [u]Ck,α(Ω)
for any x ∈ Ω, where ρ_x := dist(x, ∂Ω) and k = |β|.
Proof: First we fix an arbitrary x ∈ Ω, consider λ ≤ 1/2, and define R := λ · ρ_x; we
will fix λ later in the proof. Upon selecting a coordinate system, we select a group of n line
segments, denoted l_i, of length 2R with centers at x, which are parallel to the x_i axes. Let
us denote the endpoints of each l_i by x′_i and x′′_i. With these preliminaries, we proceed in four
main steps.
[Figure 9.2: Layout for proof of Lemma 9.4.14 — the domain Ω, the point x with ρ_x = dist(x, ∂Ω), a nearby point y with ρ_y, and the ball of radius R about x.]
Step 1: For the first step we claim that for each i there exists x*_i ∈ l_i with

|D_i u(x*_i)| ≤ (1/R) sup_Ω |u|.   (9.12)

To prove this we first note that since u ∈ C^{k,α}(Ω), we can apply the mean value theorem on l_i to find x*_i ∈ l_i such that

D_i u(x*_i) = (u(x″_i) − u(x′_i)) / (x″_i − x′_i),

and since |x″_i − x′_i| = 2R and |u(x″_i) − u(x′_i)| ≤ 2 sup_Ω |u|, the claim follows.
Step 2: Next we prove the claim that

ρ_x · sup_Ω |D_i u| ≤ (1/λ) sup_Ω |u| + 2²3²λ · ρ_x² · sup_{B_R(x)} |D_{ij} u|.
We start by calculating

|D_i u(x)| ≤ |D_i u(x*_j)| + |D_i u(x) − D_i u(x*_j)|
          ≤ |D_i u(x*_j)| + ∫_{x*_j}^{x} |D_{ij} u(t)| dt.

Using (9.12) on the first term on the RHS and Hölder's inequality on the second we get

|D_i u(x)| ≤ (1/R) sup_Ω |u| + R · sup_{B_R(x)} |D_{ij} u|
          ≤ (1/R) sup_Ω |u| + R · ρ_y^{−2} · ρ_y² · sup_{B_R(x)} |D_{ij} u|.

We know that ρ_y + R ≥ ρ_x ⟹ ρ_y ≥ ρ_x − R = (1 − λ)ρ_x ≥ ρ_x/2. Using this, the above becomes

|D_i u(x)| ≤ (1/(λρ_x)) sup_Ω |u| + (2²λ/ρ_x) · ρ_y² · sup_{B_R(x)} |D_{ij} u|.
Since we know that ρ_y ≤ R + ρ_x, we also have ρ_y ≤ 3ρ_x. Thus,

sup_Ω |D_i u| ≤ (1/(λρ_x)) sup_Ω |u| + 2²3²λ · ρ_x · sup_{y∈B_R(x)} |D_{ij} u|.

Multiplying by ρ_x gives the conclusion of this step.
Step 3: Our last claim is that

ρ_x² · sup_Ω |D_{ij} u| ≤ (ρ_x/λ) · sup_{B_R(x)} |D_i u| + 2²3^{α+2} · λ^α · ρ_x^{α+2} · [u]_{C^{2,α}(Ω)}.   (9.13)

For this we start with

|D_{ij} u(x)| ≤ |D_{ij} u(x*_j)| + |D_{ij} u(x) − D_{ij} u(x*_j)|.

Again using (9.12) (now applied to D_i u) we get

|D_{ij} u(x)| ≤ (1/R) sup_Ω |D_i u| + R^α · (|D_{ij} u(x) − D_{ij} u(x*_j)| / |x − x*_j|^α)
            ≤ (1/R) sup_Ω |D_i u| + R^α · ρ_y^{−α−2} · ρ_y^{α+2} · [u]_{C^{2,α}(Ω)}
            ≤ (1/(λρ_x)) sup_Ω |D_i u| + (2²λ^α/ρ_x²) · ρ_y^{α+2} · [u]_{C^{2,α}(Ω)}
            ≤ (1/(λρ_x)) sup_Ω |D_i u| + 2²3^{α+2}λ^α · ρ_x^α · [u]_{C^{2,α}(Ω)}.

In the above, we again used the previously stated fact that ρ_x/2 ≤ ρ_y ≤ 3ρ_x. As in the previous step, we take the supremum of the LHS and multiply both sides by ρ_x² to get the claim for Step 3.
Step 4: We are almost done. Next we realize that the proof in Step 3 can be easily adapted to also state

ρ_x · sup_Ω |D_i u| ≤ (1/λ) · sup_Ω |u| + 2·3^{α+1}λ^α · ρ_x^{α+1} · [u]_{C^{1,α}(Ω)}.   (9.14)

Now λ ≤ 1/2 has not been fixed in any of these calculations. So for given ε > 0, let us pick λ so that ε = 2²3^{α+2}λ^α in (9.13) to get

ρ_x² · sup_Ω |D_{ij} u| ≤ (2^{2/α} 3^{1+2/α} / ε^{1/α}) · ρ_x · sup_{B_R(x)} |D_i u| + ε · ρ_x^{α+2} · [u]_{C^{2,α}(Ω)}.

For the claim of Step 2 we instead choose λ so that ε = 2²3²λ:

ρ_x · sup_Ω |D_i u| ≤ (2²3²/ε) sup_Ω |u| + ε · ρ_x² · sup_{y∈B_R(x)} |D_{ij} u|.

Combining these last two equations we get

ρ_x² · sup_Ω |D_{ij} u| ≤ (2^{2+2/α} 3^{3+2/α} / ε^{1+1/α}) sup_Ω |u| + (2^{2/α} 3^{1+2/α} / ε^{1/α−1}) · ρ_x² · sup_{B_R(x)} |D_{ij} u(y)| + ε · ρ_x^{α+2} · [u]_{C^{2,α}(Ω)}.

We can immediately algebraically deduce that

ρ_x² · sup_Ω |D_{ij} u| ≤ (ε^{−2} 2^{2+2/α} 3^{3+2/α} / (ε^{1/α−1} − 2^{2/α} 3^{1+2/α})) sup_Ω |u| + (ε^{1/α} / (ε^{1/α−1} − 2^{2/α} 3^{1+2/α})) · ρ_x^{α+2} [u]_{C^{2,α}(Ω)}.

Picking

ε > 2^{2/(1−α)} 3^{(α+2)/(1−α)}

guarantees everything stays positive. Taking this result with (9.14), we can apply an induction argument to ascertain the conclusion. □
Theorem 9.4.15: Given ∂Ω ∈ C^{k,α} and u ∈ C^{k,α}(Ω), there exist 0 < C₁ ≤ C₂ < ∞ such that

C₁ · [u]_{C^{k,α}(Ω)} ≤ [u]_{L^{∞,α}_k(Ω)} ≤ C₂ · [u]_{C^{k,α}(Ω)},

i.e. C^{k,α}(Ω) ≅ L^{∞,α}_k(Ω).
Proof: First, we claim that

||u − T_{x,k}u||_{L^∞(Ω(x,ρ))} ≤ C₂ · ρ^{k+α} · [u]_{C^{k,α}(Ω)}

for all x ∈ Ω, where T_{x,k}u denotes the k-th order Taylor polynomial of u centered at x. So we first fix y ∈ Ω(x, ρ); Taylor's theorem gives

u(y) − T_{x,k}u(y) = Σ_{|β|=k} ((D^β u(z) − D^β u(x))/β!) · (y − x)^β,   (9.15)
where z := tx + (1 − t)y for some t ∈ (0, 1).

It needs to be noted that in order for the last equation to be valid, we may need to extend u to the convex hull of Ω(x, ρ). This is why we require the boundary regularity: to guarantee that such a Hölder continuous extension of u exists. We do not prove this here, but the idea of the proof is similar to that of extending Sobolev functions.
Continuing on, given our definition of z, we readily see |z − x| ≤ |y − x| ≤ ρ. This combined with (9.15) immediately indicates that

|u(y) − T_{x,k}u(y)| ≤ C₂ · ρ^{k+α} · [u]_{C^{k,α}(Ω)},

proving the claim, and with it the right-hand inequality of the theorem.
For the other inequality, we first notice that for all T ∈ P_k, [u − T]_{C^{k,α}(Ω)} = [u]_{C^{k,α}(Ω)}; we know this since D^β T = const. when |β| = k. Now, from Lemma 9.4.14 applied to u − T we have

ρ^k · sup_{Ω(x,ρ)} |D^β(u − T)| ≤ C₁ (ρ^{α+k} · [u]_{C^{k,α}(Ω)} + sup_{Ω(x,ρ)} |u − T|).

Taking the infimum over T ∈ P_k and dividing by ρ^{k+α} gives us

ρ^{−α} sup_{x,y∈Ω(x,ρ)} |D^β u(x) − D^β u(y)| ≤ C₁ ([u]_{C^{k,α}(Ω)} + [u]_{L^{∞,α}_k(Ω)}),

proving the left inequality and concluding the proof. □
9.5 The Spaces of Bounded Mean Oscillation
Note: These are also known as BMO spaces or John–Nirenberg spaces. Let Q₀ be a cube in Rⁿ and u a measurable function on Q₀. We define

u ∈ BMO(Q₀)

if and only if

|u|_{BMO(Q₀)} := sup_Q −∫_{Q∩Q₀} |u − u_{Q∩Q₀}| dx < ∞,

where the supremum is taken over all cubes Q and −∫ denotes the average integral. It is easy to see that we can equivalently take as definition

|u|_{BMO(Q₀)} = sup_{Q⊂Q₀} −∫_Q |u − u_Q| dx,

and that u ∈ BMO(Q₀) if and only if u ∈ L^{1,n}(Q₀).
Theorem 9.5.1 (John–Nirenberg): There exist constants C₁ > 0 and C₂ > 0, depending only on n, such that the following holds: if u ∈ BMO(Q₀), then for all Q ⊂ Q₀ and all t > 0,

|{x ∈ Q | (u − u_Q)(x) > t}| ≤ C₁ · exp(−C₂ t / |u|_{BMO(Q₀)}) · |Q|.
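A quick numerical illustration (not from the notes): the standard unbounded BMO function u(x) = log(1/x) on Q₀ = (0, 1) exhibits exactly the exponential decay of level sets that the theorem predicts; here |{u − u_{Q₀} > t}| = e^{−(1+t)}. The grid size and the sampled values of t below are arbitrary choices.

```python
import numpy as np

# u(x) = log(1/x) on Q0 = (0,1): unbounded, but of bounded mean oscillation.
N = 1_000_000
x = (np.arange(N) + 0.5) / N            # midpoints of a uniform grid on (0,1)
u = np.log(1.0 / x)
u_avg = u.mean()                        # ≈ ∫_0^1 log(1/x) dx = 1

# level sets decay exponentially in t, as John-Nirenberg predicts:
for t in [1.0, 2.0, 4.0]:
    measure = np.mean(u - u_avg > t)    # ≈ |{x : u(x) - u_Q0 > t}|
    assert abs(measure - np.exp(-1.0 - t)) < 1e-3
```

Of course a single example proves nothing; the content of the theorem is that this decay is uniform over all of BMO, with constants depending only on the dimension.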
Proof: It is sufficient to give the proof for Q = Q₀ (because u ∈ BMO(Q₀) ⟹ u ∈ BMO(Q) for Q ⊂ Q₀). Moreover, the inequality is homogeneous, so we can assume that

|u|_{BMO(Q₀)} = 1.

For α > 1 = sup_{Q⊂Q₀} −∫_Q |u − u_Q| dx, we use the Calderón–Zygmund argument to get a sequence of cubes Q_j^{(1)} with
i.) α < −∫_{Q_j^{(1)}} |u − u_{Q₀}| dx ≤ 2ⁿα,

ii.) |u − u_{Q₀}| ≤ α a.e. on Q₀ \ ⋃_j Q_j^{(1)}.
Then we also have

|u_{Q_j^{(1)}} − u_{Q₀}| ≤ −∫_{Q_j^{(1)}} |u − u_{Q₀}| dx ≤ 2ⁿα

and

Σ_j |Q_j^{(1)}| ≤ (1/α) ∫_{Q₀} |u − u_{Q₀}| dx.
Take one of the Q_j^{(1)}. Then, as |u|_{BMO(Q₀)} = 1, we have

−∫_{Q_j^{(1)}} |u − u_{Q_j^{(1)}}| dx ≤ 1 < α
and we can apply the Calderón–Zygmund argument again. Thus, there exists a sequence Q_j^{(2)} with

|u(x) − u_{Q₀}| ≤ |u(x) − u_{Q_j^{(1)}}| + |u_{Q_j^{(1)}} − u_{Q₀}| ≤ 2 · 2ⁿα

for a.e. x on Q₀ \ ⋃_k Q_k^{(2)}, and

Σ_j |Q_j^{(2)}| ≤ (1/α) Σ_j ∫_{Q_j^{(1)}} |u − u_{Q_j^{(1)}}| dx
             ≤ (1/α) Σ_j |Q_j^{(1)}|     (u ∈ BMO and |u|_{BMO} = 1)
             ≤ (1/α²) ∫_{Q₀} |u − u_{Q₀}| dx
             ≤ (1/α²) |Q₀|.
Repeating this argument, one gets by induction: for all k ≥ 1 there exists a sequence Q_j^{(k)} such that

|u − u_{Q₀}| ≤ k2ⁿα a.e. on Q₀ \ ⋃_j Q_j^{(k)}

and

Σ_j |Q_j^{(k)}| ≤ (1/α^k) ∫_{Q₀} |u − u_{Q₀}| dx.
Now for t ∈ R⁺, either 0 < t < 2ⁿα or there exists k ≥ 1 such that 2ⁿαk < t ≤ 2ⁿα(k + 1). In the first case,

|{x ∈ Q₀ | |u(x) − u_{Q₀}| > t}| ≤ |Q₀| ≤ |Q₀| e^{−At} e^{2ⁿAα}

(since 0 ≤ 2ⁿα − t ⟹ 1 ≤ e^{A(2ⁿα−t)} for any A > 0); in the second case,

|{x ∈ Q₀ | |u(x) − u_{Q₀}| > t}| ≤ |{x ∈ Q₀ | |u(x) − u_{Q₀}| > 2ⁿαk}|
  ≤ Σ_j |Q_j^{(k)}| ≤ (1/α^k) ∫_{Q₀} |u − u_{Q₀}| dx ≤ α^{−k} |Q₀|
  ≤ e^{(1 − t/(2ⁿα)) ln α} |Q₀| = α e^{−(ln α/(2ⁿα)) t} |Q₀| =: α e^{−At} |Q₀|

(we need −k ≤ 1 − t/(2ⁿα) and α^{−k} = e^{−k ln α}). □
Corollary 9.5.2: If u ∈ BMO(Q₀), then u ∈ L^p(Q₀) for all p > 1. Moreover,

sup_{Q⊂Q₀} (−∫_Q |u − u_Q|^p dx)^{1/p} ≤ C(n, p) · |u|_{BMO(Q₀)}.
Proof:

∫_Q |u − u_Q|^p dx = p ∫₀^∞ t^{p−1} |{x ∈ Q | |u − u_Q| > t}| dt
  ≤ p·C₁ ∫₀^∞ t^{p−1} exp(−C₂ t / |u|_{BMO(Q)}) |Q| dt
  = p·C₁ (C₂/|u|_{BMO(Q)})^{−p} |Q| ∫₀^∞ e^{−t} t^{p−1} dt ≤ C(n, p) |u|^p_{BMO(Q)} |Q|. □
Finally we have

Theorem 9.5.3 (Characterization of BMO): The following statements are equivalent:

i.) u ∈ BMO(Q₀);

ii.) there exist constants C₁, C₂ such that for all Q ⊂ Q₀ and all s > 0,

|{x ∈ Q | |u(x) − u_Q| > s}| ≤ C₁ exp(−C₂ s) |Q|;

iii.) there exist constants C₃, C₄ such that for all Q ⊂ Q₀,

∫_Q (exp(C₄|u − u_Q|) − 1) dx ≤ C₃ |Q|;

iv.) if v = exp(C₄u), then

sup_{Q⊂Q₀} (−∫_Q v dx)(−∫_Q (1/v) dx) ≤ C₅.

(We can take C₁ = C₃ = C₅ = C(n) and C₂ = 2C₄ = C(n)|u|⁻¹_{BMO(Q₀)}.)
9.6 Sobolev Spaces
Take Ω ⊆ Rⁿ.
Definition 9.6.1: f ∈ L¹(Ω) is said to be weakly differentiable if there exist g₁, ..., g_n ∈ L¹(Ω) such that

∫_Ω f (∂φ/∂x_i) dx = − ∫_Ω g_i φ dx  ∀φ ∈ C_c^∞(Ω).

Notation: Df = (g₁, ..., g_n), g_i = D_i f = ∂f/∂x_i. If f ∈ C¹(Ω), then Df = ∇f.
Definition 9.6.2: The Sobolev space W^{1,p}(Ω) is

W^{1,p}(Ω) = {f ∈ L^p(Ω) : Df exists and D_i f ∈ L^p(Ω), i = 1, ..., n}.

Proposition 9.6.3: The space W^{1,p}(Ω) is a Banach space with the norm

||f||_{1,p} = (∫_Ω |f|^p + |Df|^p dx)^{1/p},  1 ≤ p < ∞,

where

|Df| = (Σ_{i=1}^n |D_i f|²)^{1/2}.

In the p = ∞ case, we define

||f||_{1,∞} = ||f||_∞ + ||Df||_∞.
Proof: The triangle inequality follows from Minkowski’s inequality.
Completeness of W^{1,p}(Ω): Take f_j Cauchy in W^{1,p}(Ω) ⟹ f_j and D_i f_j are both Cauchy in L^p(Ω). By the completeness of L^p, f_j → f ∈ L^p(Ω) and D_i f_j → g_i ∈ L^p(Ω).

Need: Df = g. Writing D_i f_j = (g_j)_i → g_i ∈ L^p(Ω), we have by definition

∫_Ω f_j (∂φ/∂x_i) dx = − ∫_Ω (g_j)_i φ dx  ∀φ ∈ C_c^∞(Ω),

and passing to the limit on both sides gives

∫_Ω f (∂φ/∂x_i) dx = − ∫_Ω g_i φ dx. □
Remark: W^{1,2}(Ω) is a Hilbert space with the inner product

(u, v) = ∫_Ω (uv + Du·Dv) dx.

Thus, we can use the Riesz Representation Theorem and Lax–Milgram in this space.
Examples:
• f(x) = |x| on (−1, 1).

We will prove f ∈ W^{1,1}(−1, 1) with

g = Df = { −1 for x ∈ (−1, 0);  1 for x ∈ (0, 1) }.

To show this, we must check ∫_{−1}^1 f φ′ dx = −∫_{−1}^1 gφ dx for all test functions φ. On the one hand,

−∫_{−1}^1 gφ dx = −[ −∫_{−1}^0 φ dx + ∫_0^1 φ dx ] = ∫_{−1}^0 φ dx − ∫_0^1 φ dx.
But we also have

∫_{−1}^1 f φ′ dx = −∫_{−1}^0 xφ′ dx + ∫_0^1 xφ′ dx
  = ∫_{−1}^0 φ dx + 0 − ∫_0^1 φ dx + 0,

which matches. Thus, g is the weak derivative of f.
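The integration-by-parts identity above can also be checked numerically. The following sketch (an illustration, not part of the notes) tests ∫ fφ′ dx = −∫ gφ dx for one particular test function; the choice φ(x) = (1 − x²)² cos(3x), which vanishes at ±1, is an arbitrary stand-in for a C_c^∞ test function.

```python
import numpy as np

# Verify ∫ f φ' dx = -∫ g φ dx for f(x) = |x|, g(x) = sgn(x).
N = 200_001
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
f = np.abs(x)
g = np.sign(x)

phi = (1 - x**2) ** 2 * np.cos(3 * x)                  # test function, φ(±1) = 0
dphi = (-4 * x * (1 - x**2) * np.cos(3 * x)            # its exact derivative
        - 3 * (1 - x**2) ** 2 * np.sin(3 * x))

lhs = np.sum(f * dphi) * dx     # ∫ f φ' dx
rhs = -np.sum(g * phi) * dx     # -∫ g φ dx
assert abs(lhs - rhs) < 1e-5
```

Running the same check with g replaced by anything other than sgn(x) on a set of positive measure breaks the identity, which is the point of the definition.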
• f(x) = sgn(x) = { −1 for x ∈ (−1, 0);  1 for x ∈ (0, 1) }.

Our intuition says that f′ = 2δ₀, which is not a function in L¹(−1, 1). Thus, we seek to prove that f is not weakly differentiable. So, suppose otherwise: g ∈ L¹(−1, 1), Df = g. Then
∫_{−1}^1 gφ dx = −∫_{−1}^1 f φ′ dx = ∫_{−1}^0 φ′ dx − ∫_0^1 φ′ dx
  = φ(0) − φ(−1) − [φ(1) − φ(0)]
  = 2φ(0).
Now choose φ_j ∈ C_c^∞(−1, 1) such that 0 ≤ φ_j ≤ 1, φ_j(0) = 1, and φ_j(x) → 0 a.e. With this choice of φ_j we have

2φ_j(0) = ∫_{−1}^1 gφ_j dx

⟹ 2 = lim_{j→∞} 2φ_j(0) = lim_{j→∞} ∫_{−1}^1 gφ_j dx = 0, a contradiction,

where the last equality is a result of Lebesgue's dominated convergence theorem.
For higher derivatives, let α be a multi-index. We say g = D^α f if

∫_Ω f D^α φ dx = (−1)^{|α|} ∫_Ω gφ dx  ∀φ ∈ C_c^∞(Ω).

Definition 9.6.4:

W^{k,p}(Ω) = {f ∈ L^p(Ω) : D^α f exists and D^α f ∈ L^p(Ω) for all |α| ≤ k}.
Sobolev functions are like classical functions. We will address the following in the next few lectures:

• Approximation
• Calculus, specifically product and chain rules
• Boundary values
• Embedding theorems. In the HW we showed that

||u||_{C^{0,α}(R)} ≤ |u(x₀)| + C||u′||_{L^p(R)},

which gives us the intuition that W^{1,p}(R) ↪ C^{0,α}(R).

Recall: W^{1,p}(Ω) means we have weak derivatives in L^p(Ω): g = D^α f with

∫_Ω f D^α φ dx = (−1)^{|α|} ∫_Ω gφ dx  ∀φ ∈ C_c^∞(Ω).

• n = 1: W^{1,p} ⊂ C^{0,α} with α = 1 − 1/p.
Gregory T. von Nessi
Versio
n::3
11
02
01
1
9.6 Sobolev Spaces239
• n ≥ 2: W^{1,n} functions need not be Hölder continuous.
Lemma 9.6.5: Let f ∈ L¹_loc(Ω), let α be a multi-index, and let φ_ε(x) = ε^{−n}φ(ε^{−1}x) be the usual mollifier. If D^α f exists, then D^α(φ_ε ∗ f)(x) = (φ_ε ∗ D^α f)(x) for all x with dist(x, ∂Ω) > ε.

Proof: Without loss of generality take |α| = 1, D^α = ∂/∂x_i. Then

(∂/∂x_i)(φ_ε ∗ f)(x) = (∂/∂x_i) ∫_{Rⁿ} φ_ε(x − y)f(y) dy
  = ∫_{Rⁿ} (∂/∂x_i) φ_ε(x − y) f(y) dy
  = −∫_{Rⁿ} (∂/∂y_i) φ_ε(x − y) f(y) dy.

Now, since φ_ε(x − ·) ∈ C_c^∞(Ω) when dist(x, ∂Ω) > ε, we apply the definition of the weak partial derivative to get

(∂/∂x_i)(φ_ε ∗ f)(x) = ∫_{Rⁿ} φ_ε(x − y) (∂/∂y_i) f(y) dy = (φ_ε ∗ (∂/∂x_i) f)(x). □
Proposition 9.6.6: Let f, g ∈ L¹_loc(Ω). Then g = D^α f ⟺ there exist f_m ∈ C^∞(Ω) such that f_m → f in L¹_loc(Ω) and D^α f_m → g in L¹_loc(Ω).

Proof:

(⟸) Suppose f_m → f and D^α f_m → g. Then

∫_Ω f_m · D^α φ dx = (−1)^{|α|} ∫_Ω D^α f_m · φ dx,

and passing to the limit on both sides gives

∫_Ω f · D^α φ dx = (−1)^{|α|} ∫_Ω gφ dx.

(Aside: the left-hand side converges since

|∫_Ω f_m · D^α φ dx − ∫_Ω f · D^α φ dx| ≤ ∫_Ω |f_m − f| |D^α φ| dx ≤ C ∫_{supp(φ)} |f_m − f| dx → 0,

where C bounds |D^α φ|; the right-hand side is handled the same way.)

(⟹) Use convolution and Lemma 9.6.5.
9.6.1 Calculus with Sobolev Functions

Proposition 9.6.7 (Product Rule): If f ∈ W^{1,p}(Ω) and g ∈ W^{1,p′}(Ω), then fg ∈ W^{1,1}(Ω) and

D(fg) = Df · g + f · Dg.

Proof: By Hölder, fg, Df·g, and f·Dg are in L¹(Ω); explicitly,

∫_Ω |fg| dx ≤ ||f||_p ||g||_{p′}.
240Chapter 9: Function Spaces
Now suppose p < ∞ (possible by symmetry) and let f_ε = φ_ε ∗ f be the mollification of f, so that f_ε → f in L¹_loc(Ω). Let Ω′ = supp(φ) and take ε < dist(Ω′, ∂Ω). Then

∫_Ω f_ε g · D_i φ dx = ∫_Ω g · D_i(f_ε φ) dx − ∫_Ω g · D_i f_ε · φ dx
  = −∫_Ω D_i g · f_ε φ dx − ∫_Ω g · D_i f_ε · φ dx   (def. of weak deriv. of g)

⟹ ∫_Ω f_ε g · D_i φ dx = −∫_Ω (D_i f_ε · g + f_ε · D_i g)φ dx.

Now f_ε → f in L¹_loc and, by Lemma 9.6.5, D_i f_ε → D_i f in L¹_loc as ε → 0. Thus,

∫_Ω fg · D_i φ dx = −∫_Ω (D_i f · g + f · D_i g)φ dx,

and by applying the definition of weak derivatives, we get

D_i(fg) = D_i f · g + f · D_i g. □
Proposition 9.6.8 (Chain Rule): If f ∈ C¹ with f′ bounded and g ∈ W^{1,p}(Ω), then f∘g ∈ W^{1,p}(Ω) and

D(f∘g) = (f′∘g)Dg.

Remark: The same assertion holds if f is continuous and piecewise C¹ with bounded derivative.

Proof: Let g_ε = φ_ε ∗ g, so g_ε → g in L^p_loc and Dg_ε → Dg in L^p_loc. (Assume p < ∞; even p = 1 would be sufficient.) We claim

D(f∘g_ε) = (f′∘g_ε)Dg_ε → (f′∘g)Dg in L¹_loc.
Take Ω′ ⊂⊂ Ω and ε small enough. Then

∫_{Ω′} |(f′∘g_ε)Dg_ε − (f′∘g)Dg| dx ≤ ∫_{Ω′} |(f′∘g_ε)(Dg_ε − Dg)| dx + ∫_{Ω′} |(f′∘g − f′∘g_ε)Dg| dx.

Now f′∘g_ε is bounded since f′ is bounded, and Dg_ε − Dg → 0 in L¹_loc; thus the first term on the RHS goes to 0 as ε → 0. Since g_ε → g in L¹_loc, g_ε → g in measure, and thus there exists a subsequence such that g_ε → g a.e. Thus we can apply Lebesgue's dominated convergence theorem to see that the second term on the RHS goes to 0 as ε → 0. Thus we have

lim_{ε→0} ∫_{Ω′} |(f′∘g_ε)Dg_ε − (f′∘g)Dg| dx = 0,

and by the application of the definition of a weak derivative, we attain the result. □
Lemma 9.6.9: Let f⁺ = max{f, 0} and f⁻ = min{f, 0}, so f = f⁺ + f⁻. If f ∈ W^{1,p}, then f⁺ and f⁻ are in W^{1,p}, with

Df⁺ = { Df on {f > 0};  0 on {f ≤ 0} }

and

D|f| = { Df on {f > 0};  −Df on {f < 0};  0 on {f = 0} }.
Corollary 9.6.10: Let c ∈ R and E = {x ∈ Ω : f(x) = c}. Then Df = 0 a.e. in E.
Theorem 9.6.11: Let 1 ≤ p < ∞. Then W^{k,p}(Ω) ∩ C^∞(Ω) is dense in W^{k,p}(Ω).

Proof: Choose Ω₁ ⊂⊂ Ω₂ ⊂⊂ Ω₃ ⊂⊂ ··· ⊂⊂ Ω, e.g. Ω_i = {x ∈ Ω : dist(x, ∂Ω) > 1/i}. Define the following open sets: U_i = Ω_{i+1} \ Ω̄_{i−1}.

[Figure: the nested sets Ω₁ ⊂⊂ Ω₂ ⊂⊂ Ω₃ ⊂⊂ ··· exhausting Ω, with the ring-shaped set U₂ shaded.]

Then ⋃_i U_i = Ω, and thus {U_i} is an open cover of Ω. This cover is locally finite, i.e., for all x ∈ Ω there exists ε > 0 such that B(x, ε) ∩ U_i ≠ ∅ only for finitely many i.
Theorem 9.6.12 (Partition of Unity): Let {U_i} be an open and locally finite cover of Ω. Then there exist η_i ∈ C_c^∞(U_i) with η_i ≥ 0 and

Σ_{i=1}^∞ η_i = 1 in Ω.

Proof of Theorem 9.6.11 (continued): Let {η_i} be a partition of unity corresponding to {U_i}, and set f_i = η_i f ∈ W^{k,p}(Ω), so that supp(f_i) ⊂ U_i = Ω_{i+1} \ Ω̄_{i−1}. Choose ε_i > 0 such that g_i := φ_{ε_i} ∗ f_i satisfies
• supp(g_i) ⊂ Ω_{i+2} \ Ω̄_{i−2},
• ||g_i − f_i||_{W^{k,p}(Ω)} ≤ δ/2^i, with δ > 0 fixed,
• g_i ∈ C_c^∞(Ω).

Now define

g(x) = Σ_{i=1}^∞ g_i(x)

(this sum is well-defined since g_i(x) ≠ 0 only for finitely many i). Then

||g − f||_{W^{k,p}(Ω)} = ||Σ_{i=1}^∞ g_i − Σ_{i=1}^∞ η_i f||_{W^{k,p}(Ω)} ≤ Σ_{i=1}^∞ ||g_i − η_i f||_{W^{k,p}(Ω)} ≤ Σ_{i=1}^∞ δ/2^i = δ. □
Remark: The radius of the ball on which you average to get g(x) is proportional to dist(x, ∂Ω).
Theorem 9.6.13 (Fundamental Theorem of Calculus):

i.) If g ∈ L¹(a, b) and f(x) = ∫_a^x g(t) dt, then f ∈ W^{1,1}(a, b) and Df = g.

ii.) If f ∈ W^{1,1}(a, b) and g = Df, then f(x) = C + ∫_a^x g(t) dt for almost all x ∈ (a, b), where C is a constant.

Remark: In particular, there exists a representative of f that is continuous.

Proof of ii.): Choose f_ε ∈ W^{1,1}(a, b) ∩ C^∞(a, b) such that f_ε → f in W^{1,1}(a, b), i.e., f_ε → f in L¹(a, b) and Df_ε → Df = g in L¹(a, b). There exists a subsequence such that f_ε(x) → f(x) a.e. The fundamental theorem for f_ε gives

f_ε(x) − f_ε(a) = ∫_a^x f′_ε(t) dt,

and passing to the limit (a.e. on the left, in L¹ on the right) yields

f(x) − L = ∫_a^x g(t) dt,

where L is finite and f(a) is chosen such that L = lim_{x→a} f(x) =: f(a) (this only changes f on a set of measure zero). □
9.6.2 Boundary Values, Boundary Integrals, Traces
Definition 9.6.14: Let 1 ≤ p < ∞.

1. W^{1,p}₀(Ω) = closure of C_c^∞(Ω) in W^{1,p}(Ω).
Gregory T. von Nessi
Versio
n::3
11
02
01
1
9.6 Sobolev Spaces243
2. For p = ∞: W^{1,∞}₀(Ω) is given by

W^{1,∞}₀(Ω) = {f ∈ W^{1,∞}(Ω) : ∃f_j ∈ C_c^∞(Ω) s.t. f_j → f in W^{1,1}_loc(Ω) and sup_j ||f_j||_{W^{1,∞}(Ω)} ≤ C < ∞}.

3. If f₀ ∈ W^{1,p}(Ω), then we say f = f₀ on ∂Ω if

f − f₀ ∈ W^{1,p}₀(Ω).
Remark: It’s not always possible to define boundary values.
For example, take Ω ⊆ R², Ω = B(0, 1) \ (B(0, 1/2) ∪ Γ), where Γ = {x = (x₁, 0) : 1/2 < x₁ < 1}, and let u(x₁, x₂) = φ, the polar angle. Here Ω does not lie "on one side" of Γ.
Definition 9.6.15 (Lipschitz boundary): f is Lipschitz continuous if f ∈ C^{0,1}, i.e.

|f(x) − f(y)| ≤ L|x − y|

for some constant L. ∂Ω is said to be Lipschitz if there exist open sets U_i, i = 1, ..., m, such that ∂Ω ⊂ ⋃_{i=1}^m U_i, where in each U_i we can find a local coordinate system such that ∂Ω ∩ U_i is the graph of a Lipschitz function and Ω ∩ U_i lies on one side of the graph.

Remark: The definition of a C^{k,α} boundary is analogous: ∂Ω ∩ U_i is the graph of a C^{k,α} function. The following figure should make clear the concept of graphs of boundaries.

Definition 9.6.16: Let f : ∂Ω → R. In the local coordinates above, f is measurable (resp. integrable) if y ↦ f(y, g(y)) is measurable (resp. integrable).
Boundary integral: Choose U₀ such that Ω ⊂ ⋃_{i=0}^m U_i. Now pick a partition of unity η_i corresponding to these U_i such that Σ_{i=1}^m η_i = 1 on Ω. Then

∫_{∂Ω} f dH^{n−1} = ∫_{∂Ω} f dS = Σ_{i=1}^m ∫_{∂Ω} η_i f dS.

To make it clear what the integral in the last equality means, assume f has support in one U_i; then in that U_i, with coordinates properly redefined,

∫_{∂Ω} f dS = ∫_{R^{n−1}} f(y, g(y)) √(1 + |Dg(y)|²) dy.
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
244Chapter 9: Function Spaces
[Figure: local description of ∂Ω — in each U_i the boundary is, in suitable coordinates y = (y₁, ..., y_{n−1}), the graph of a function g : R^{n−1} → R (e.g. g₁(y) = |y| at a corner); the sets U₀, ..., U₄ cover Ω̄.]
So it's clear how to extend this to general f, since η_i f does have support in U_i.

Remark: √(1 + |Dg(y)|²) dy can be defined for g Lipschitz (the derivative is well defined except on a null set of "kinks"). We will assume ∂Ω is C¹.
Definition 9.6.17:

L^p(∂Ω) = {f : ∂Ω → R : ∫_{∂Ω} |f|^p dS < ∞},
L^∞(∂Ω) = {f : ∂Ω → R : ess sup_{∂Ω} |f| < ∞}.
9.6.3 Extension of Sobolev Functions
Theorem 9.6.18 (Extension): Let Ω ⊆ Rⁿ be bounded with ∂Ω Lipschitz, and suppose Ω′ is an open neighborhood of Ω̄, i.e. Ω ⊂⊂ Ω′. Then there exists a bounded linear operator

E : W^{1,p}(Ω) → W^{1,p}₀(Ω′) such that Ef|_Ω = f.

[Figure: Ω sitting inside Ω′; the extension is built by reflection across ∂Ω and multiplication by a cut-off function.]

Outline of the Proof: Reflect and multiply by cut-off functions. For a C¹ boundary there exists a transformation that maps ∂Ω locally to a hyperplane, "straightening" or "flattening" the boundary.
[Figure: the straightening maps Φ (from x-coordinates near x₀ ∈ ∂Ω to y-coordinates) and Ψ = Φ⁻¹, taking the boundary graph to a piece of R^{n−1}.]

Define Φ by

y_i = x_i = Φ_i(x), i = 1, ..., n − 1,
y_n = x_n − g(x̄) = Φ_n(x), x̄ = (x₁, ..., x_{n−1}),

so that det(DΦ(x)) = 1.

Suppose first that ∂Ω is flat near x₀: there exists B = B(x₀, R) such that B(x₀, R) ∩ ∂Ω ⊂ {x_n = 0}. Set

B⁺ = B ∩ {x_n ≥ 0} ⊂ Ω̄,
B⁻ = B ∩ {x_n ≤ 0} ⊂ Rⁿ \ Ω.
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
246Chapter 9: Function Spaces
1. [Figure: the half-balls B⁺ = B ∩ {x_n ≥ 0} and B⁻ = B ∩ {x_n ≤ 0} around x₀ ∈ ∂Ω.]
2. Approximate: suppose first that u ∈ C^∞(Ω̄), and extend u by higher-order reflection:

ū(x) = { u(x) =: u⁺(x) if x ∈ B⁺;  −3u(x̄, −x_n) + 4u(x̄, −x_n/2) =: u⁻(x) if x ∈ B⁻ }.
3. Check that ū is C¹ across the boundary:

• ū is continuous across {x_n = 0};
• all tangential derivatives are continuous across {x_n = 0};
• for the normal derivative, on B⁻ we have

∂ū/∂x_n (x) = 3 (∂u/∂x_n)(x̄, −x_n) − 2 (∂u/∂x_n)(x̄, −x_n/2),

so that

∂ū/∂x_n |_{x_n=0} = (∂u/∂x_n)(x̄, 0).
4. Calculate:

||ū||_{L^p(B)} ≤ C||u||_{L^p(B⁺)},
||Dū||_{L^p(B)} ≤ C||Du||_{L^p(B⁺)}.
5. Straightening out of the boundary: for a general C¹ boundary, let Φ, Ψ be the straightening maps as above and set u*(y) = u(Ψ(y)).

[Figure: u on a neighborhood B of x₀ in x-coordinates; u* = u∘Ψ on Φ(B) in y-coordinates; W = Ψ(B).]

Choose B(y₀, R₀) ⊂ Φ(B) and apply the construction above to u*. Now extend u* such that

||ū*||_{W^{1,p}(B(y₀,R₀))} ≤ C||u*||_{W^{1,p}(B⁺(y₀,R₀))}.

Changing coordinates from y back to x gives an extension ū on W that satisfies

||ū||_{W^{1,p}(W)} ≤ C||u||_{W^{1,p}(W∩Ω)} ≤ C||u||_{W^{1,p}(Ω)}.

Remark: The constant C depends (by the chain rule) on ||Φ||_∞, ||Ψ||_∞, ||DΦ||_∞, and ||DΨ||_∞, but not on u.
6. For each x₀ ∈ ∂Ω we obtain a neighborhood W(x₀) and a local extension ū. Since ∂Ω is compact, cover ∂Ω by finitely many such sets W₁, ..., W_m (with corresponding extensions ū_i), choose W₀ such that Ω ⊆ ⋃_{i=0}^m W_i with ū₀ = u, and let η_i be the corresponding partition of unity. Set

ū(x) = Σ_{i=0}^m η_i ū_i.

Making the W(x₀) smaller if necessary, we may assume that each W(x₀) ⊂ Ω′, thus ū = 0 outside Ω′. By Minkowski,

||ū||_{W^{1,p}(Ω′)} ≤ C||u||_{W^{1,p}(Ω)}.
7. Define Eu = ū, which is bounded and linear on C^∞(Ω̄).

8. Approximation (refinement of the approximation result): find u_m ∈ C^∞(Ω̄) such that u_m → u in W^{1,p}(Ω). Then

||Eu_m − Eu_n||_{W^{1,p}(Ω′)} ≤ C||u_m − u_n||_{W^{1,p}(Ω)} → 0,

so Eu_m is Cauchy in W^{1,p}(Ω′) and thus has a limit in this space; call it Eu. □
9.6.4 Traces
Theorem 9.6.19: Let Ω be a bounded domain in Rⁿ with Lipschitz boundary, and let 1 ≤ p < ∞. Then there exists a unique linear bounded operator

T : W^{1,p}(Ω) → L^p(∂Ω)

such that

Tu = u|_{∂Ω} if u ∈ W^{1,p}(Ω) ∩ C⁰(Ω̄).

Proof: Let u ∈ C¹(Ω̄) and x₀ ∈ ∂Ω, and suppose first that ∂Ω is flat near x₀: there exists B(x₀, R) such that ∂Ω ∩ B(x₀, R) ⊆ {x_n = 0}. Define B̂ = B(x₀, R/2) and let Γ = ∂Ω ∩ B̂.
[Figure: the ball B(x₀, R) around x₀ ∈ ∂Ω, the half-ball B⁺, the flat boundary piece Γ = ∂Ω ∩ B(x₀, R/2), and a vertical segment from (x̄, 0) up to (x̄, x_n*).]

Choose ξ ∈ C_c^∞(B(x₀, R)) with ξ ≥ 0 and ξ ≡ 1 on B̂.
Note: From the above figure, choosing x_n* so that (x̄, x_n*) lies outside supp(ξ), we see

ξ|u|^p(x̄, 0) = (ξ|u|^p)(x̄, 0) − (ξ|u|^p)(x̄, x_n*) = ∫_{x_n*}^0 (ξ|u|^p)_{x_n} dx_n.   (9.16)
So we have for the main calculation:

∫_Γ |u(x̄, 0)|^p dx̄ ≤ ∫_{R^{n−1}} ξ|u|^p dx̄
  = −∫_{B⁺} (ξ|u|^p)_{x_n} dx
  = −∫_{B⁺} (|u|^p ξ_{x_n} + p ξ|u|^{p−1} sgn(u) u_{x_n}) dx   (|u|^{p−1} ∈ L^{p′}, u_{x_n} ∈ L^p)
  ≤ ∫_{B⁺} |u|^p |ξ_{x_n}| dx + C (∫_{B⁺} |u|^p dx)^{1/p′} (∫_{B⁺} |u_{x_n}|^p dx)^{1/p}   (Hölder)
  ≤ C ∫_{B⁺} (|u|^p + |Du|^p) dx   (Young's).
General situation: after a non-linear change of coordinates just as before,

∫_Γ |u|^p dS ≤ C ∫_Ω (|u|^p + |Du|^p) dx,

where Γ ⊂ ∂Ω contains the point x₀. Now, ∂Ω is compact, so choose finitely many open sets Γ_i ⊂ ∂Ω such that ∂Ω ⊂ ⋃_{i=1}^m Γ_i with

∫_{Γ_i} |u|^p dS ≤ C ∫_Ω (|u|^p + |Du|^p) dx.
⟹ If we define Tu = u|_{∂Ω} for u ∈ C¹(Ω̄), then

||Tu||_{L^p(∂Ω)} ≤ C||u||_{W^{1,p}(Ω)}.

Approximation: for u ∈ W^{1,p}(Ω) choose u_k ∈ C^∞(Ω̄) such that u_k → u in W^{1,p}(Ω). The estimate

||Tu_m − Tu_n||_{L^p(∂Ω)} ≤ C||u_m − u_n||_{W^{1,p}(Ω)}

shows that Tu_m is a Cauchy sequence in L^p(∂Ω). Define

Tu = lim_{k→∞} Tu_k in L^p(∂Ω).

If u ∈ W^{1,p}(Ω) ∩ C⁰(Ω̄), then the approximation u_k converges uniformly on Ω̄ to u and Tu = u|_{∂Ω}. □
Remark: W^{1,∞}(Ω) is the space of all functions that have Lipschitz continuous representatives.
Theorem 9.6.20 (Trace-zero functions in W^{1,p}): Assume Ω is bounded and ∂Ω is C¹, and suppose furthermore that u ∈ W^{1,p}(Ω). Then

u ∈ W^{1,p}₀(Ω) ⟺ Tu = 0.

Proof: (⟹) We are given u ∈ W^{1,p}₀(Ω). Then by definition there exist u_m ∈ C_c^∞(Ω) such that

u_m → u in W^{1,p}(Ω).

Clearly Tu_m = 0 on ∂Ω, so by the boundedness of the trace operator

||Tu||_{L^p(∂Ω)} = ||Tu − Tu_m||_{L^p(∂Ω)} ≤ C||u − u_m||_{W^{1,p}(Ω)} → 0.

Thus Tu = lim_{m→∞} Tu_m = 0.
(⟸) Assume Tu = 0 on ∂Ω. As usual, assume that ∂Ω is flat around a point x₀; using standard partitions of unity we may assume for the following calculations that

u ∈ W^{1,p}(R^n₊), supp(u) ⊂⊂ R̄^n₊, Tu = 0 on ∂R^n₊ = R^{n−1}.

Since Tu = 0 on R^{n−1}, there exist u_m ∈ C¹(R̄^n₊) such that

u_m → u in W^{1,p}(R^n₊)   (9.17)

and

Tu_m = u_m|_{R^{n−1}} → 0 in L^p(R^{n−1}).   (9.18)
Now, if x̄ ∈ R^{n−1} and x_n ≥ 0, we have by the fundamental theorem of calculus

|u_m(x̄, x_n)| = |u_m(x̄, 0) + u_m(x̄, x_n) − u_m(x̄, 0)|
  ≤ |u_m(x̄, 0)| + ∫_0^{x_n} |u_{m,x_n}(x̄, t)| dt
  ≤ |u_m(x̄, 0)| + x_n^{(p−1)/p} (∫_0^{x_n} |u_{m,x_n}(x̄, t)|^p dt)^{1/p}   (Hölder)
  ≤ |u_m(x̄, 0)| + x_n^{(p−1)/p} (∫_0^{x_n} |Du_m(x̄, t)|^p dt)^{1/p}.

From this, we have

∫_{R^{n−1}} |u_m(x̄, x_n)|^p dx̄ ≤ 2^{p−1} (∫_{R^{n−1}} |u_m(x̄, 0)|^p dx̄ + x_n^{p−1} ∫_0^{x_n} ∫_{R^{n−1}} |Du_m(x̄, t)|^p dx̄ dt).

Letting m → ∞ and using (9.17) and (9.18), we have

∫_{R^{n−1}} |u(x̄, x_n)|^p dx̄ ≤ 2^{p−1} x_n^{p−1} ∫_0^{x_n} ∫_{R^{n−1}} |Du(x̄, t)|^p dx̄ dt.   (9.19)
Approximation: Next, let ξ ∈ C^∞(R) with ξ ≡ 1 on [0, 1], ξ ≡ 0 on R \ [0, 2], and 0 ≤ ξ ≤ 1, and write

ξ_m(x) := ξ(m x_n)   (x ∈ R^n₊),
w_m(x) := u(x)(1 − ξ_m(x)).

Then we have

w_{m,x_n} = u_{x_n}(1 − ξ_m) − m u ξ′(m x_n),
D_{x̄} w_m = D_{x̄} u (1 − ξ_m).

[Figure 9.3: A possible ξ — equal to 1 near x_n = 0 and vanishing past x_n = 2/m after the rescaling x_n ↦ m x_n.]
Consequently, we have

∫_{R^n₊} |Dw_m − Du|^p dx = ∫_{R^n₊} |−ξ_m Du − m u ξ′(m x_n)|^p dx
  ≤ ∫_{R^n₊} |ξ_m|^p |Du|^p dx + C m^p ∫_0^{2/m} ∫_{R^{n−1}} |u|^p dx̄ dx_n
  =: A + B.
Note: In deriving B, C is picked to bound |ξ′|p. Also the integral is restricted since ξ′ = 0 if
xn > 2/m by construction.
Now

lim_{m→∞} A = 0,

since ξ_m ≠ 0 only if 0 ≤ x_n ≤ 2/m. To estimate the term B, we use (9.19):

B ≤ C · 2^{p−1} m^p (2/m)^p ∫_0^{2/m} ∫_{R^{n−1}} |Du|^p dx̄ dx_n
  ≤ C ∫_0^{2/m} ∫_{R^{n−1}} |Du|^p dx̄ dx_n → 0 as m → ∞.
Thus, we deduce Dw_m → Du in L^p(R^n₊). Since clearly w_m → u in L^p(R^n₊) (this is clear from the construction of w_m), we can now conclude

w_m → u in W^{1,p}(R^n₊).

But w_m = 0 if 0 < x_n < 1/m. We can therefore mollify the w_m to produce functions u_m ∈ C_c^∞(R^n₊) such that u_m → u in W^{1,p}(R^n₊). Hence u ∈ W^{1,p}₀(R^n₊). □
9.6.5 Sobolev Inequalities
Theorem 9.6.21 (Sobolev–Gagliardo–Nirenberg Inequality): If u ∈ W^{1,1}(Rⁿ), then

||u||_{L^{n/(n−1)}(Rⁿ)} ≤ (1/√n) ∫_{Rⁿ} |Du| dx.

Remark: For Ω a bounded set in Rⁿ, the above (together with the general exponent case below) means that W^{1,p}₀(Ω) ⊆ L^{p*}(Ω) and ||u||_{L^{p*}(Ω)} ≤ C||u||_{W^{1,p}₀(Ω)}.
Proof: Since C_c^∞(Ω) is dense in W^{1,p}₀(Ω), we first estimate for smooth functions, then pass to the limit in the estimate. It thus suffices to prove

||f||_{L^{p*}(Ω)} ≤ C||Df||_{L^p(Ω)}  ∀f ∈ C_c^∞(Ω).

Special case: p = 1, p* = n/(n−1).

n = 2 (so p* = 2):

f(x) = f(x₁, x₂) = ∫_{−∞}^{x₁} D₁f(t, x₂) dt

⟹ |f(x)| ≤ ∫_{−∞}^{x₁} |D₁f(t, x₂)| dt ≤ ∫_{−∞}^∞ |D₁f(t, x₂)| dt =: g₁(x₂).

By the same idea,

|f(x)| ≤ ∫_{−∞}^∞ |D₂f| dx₂ =: g₂(x₁),

so

|f(x)|² ≤ g₁(x₂)g₂(x₁).
||f||₂² = ∫_{R²} |f(x)|² dx = ∫_{R²} g₁(x₂)g₂(x₁) dx₁dx₂
  = ∫_R g₁(x₂) dx₂ ∫_R g₂(x₁) dx₁
  = (∫_R ∫_R |D₁f| dx₁dx₂)(∫_R ∫_R |D₂f| dx₂dx₁)
  ≤ (∫_{R²} |Df| dx)²

⟹ ||f||_{L²(R²)} ≤ ||Df||_{L¹(R²)}.
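As a sanity check (not in the notes), one can test ||f||_{L²(R²)} ≤ ||Df||_{L¹(R²)} on a grid for a concrete function; a Gaussian is used here since its truncation error on [−5, 5]² is negligible, and in fact ||f||₂ = √(π/2) ≈ 1.25 while ||Df||₁ = π^{3/2} ≈ 5.57.

```python
import numpy as np

# Numerically compare ||f||_{L^2} and ||Df||_{L^1} for f = exp(-(x^2+y^2)).
h = 0.01
x = np.arange(-5.0, 5.0, h)
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 + Y**2))
fy, fx = np.gradient(f, h)                         # derivatives along axes 0, 1
l2_f = np.sqrt(np.sum(f**2) * h**2)                # ||f||_{L^2} ≈ sqrt(π/2)
l1_Df = np.sum(np.sqrt(fx**2 + fy**2)) * h**2      # ||Df||_{L^1} ≈ π^{3/2}
assert l2_f <= l1_Df
```

The inequality holds with plenty of room here; equality-like behavior appears only for functions close to (rescaled) characteristic functions of balls.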
General Case (n ≥ 2):

|f(x)| ≤ g_i(x) := ∫_{−∞}^∞ |D_i f| dx_i.

We have

|f(x)|^{n/(n−1)} ≤ (Π_{i=1}^n g_i)^{1/(n−1)},

so

∫_R |f(x)|^{n/(n−1)} dx₁ ≤ ∫_R (Π_{i=1}^n g_i)^{1/(n−1)} dx₁.
Now taking Hölder for n − 1 terms with p_i = n − 1 (note g₁ does not depend on x₁):

∫_R (Π_{i=1}^n g_i)^{1/(n−1)} dx₁ = [g₁(x)]^{1/(n−1)} ∫_R (Π_{i=2}^n g_i)^{1/(n−1)} dx₁ ≤ [g₁(x)]^{1/(n−1)} Π_{i=2}^n (∫_R g_i(x) dx₁)^{1/(n−1)}.
Integrate with respect to x₂ to get

∫_R ∫_R |f(x)|^{n/(n−1)} dx₁dx₂
  ≤ ∫_R [g₁(x)]^{1/(n−1)} Π_{i=2}^n (∫_R g_i(x) dx₁)^{1/(n−1)} dx₂
  = (∫_R g₂(x) dx₁)^{1/(n−1)} [∫_R [g₁(x)]^{1/(n−1)} Π_{i=3}^n (∫_R g_i(x) dx₁)^{1/(n−1)} dx₂]
  ≤ (∫_R g₂(x) dx₁)^{1/(n−1)} (∫_R g₁(x) dx₂)^{1/(n−1)} Π_{i=3}^n (∫_R ∫_R g_i(x) dx₁dx₂)^{1/(n−1)}   (Hölder).
Integrating successively in x₃, ..., x_n and applying Hölder at each step gives

∫_{Rⁿ} |f(x)|^{n/(n−1)} dx ≤ Π_{i=1}^n (∫_{R^{n−1}} g_i(x) dx₁···dx_{i−1}dx_{i+1}···dx_n)^{1/(n−1)} = Π_{i=1}^n (∫_{Rⁿ} |D_i f| dx)^{1/(n−1)},

by the definition of the g_i.
So this implies that

||f||_{L^{n/(n−1)}(Rⁿ)} ≤ Π_{i=1}^n (∫_{Rⁿ} |D_i f| dx)^{1/n}.

We can do a little better than this by observing

(Π_{i=1}^n a_i)^{1/n} ≤ (1/n) Σ_{i=1}^n a_i

(which is just a generalized Young's inequality). So we have

||f||_{L^{n/(n−1)}(Rⁿ)} ≤ (1/n) ∫_{Rⁿ} Σ_{i=1}^n |D_i f| dx ≤ (1/√n) ∫_{Rⁿ} |Df| dx,

since

Σ_{i=1}^n a_i ≤ √n (Σ_{i=1}^n a_i²)^{1/2}.
In the general case 1 ≤ p < n, one applies the above to |f|^γ with γ suitably chosen (see Section 9.7). □
Theorem 9.6.22 (Poincaré's Inequality): Let Ω ⊆ Rⁿ be bounded. Then

||u||_{L^p(Ω)} ≤ C||Du||_{L^p(Ω)}  ∀u ∈ W^{1,p}₀(Ω), 1 ≤ p ≤ ∞.

If ∂Ω is Lipschitz, then

||u − ū||_{L^p(Ω)} ≤ C||Du||_{L^p(Ω)}  ∀u ∈ W^{1,p}(Ω), 1 ≤ p ≤ ∞,

where

ū = u_Ω = (1/|Ω|) ∫_Ω u(x) dx = −∫_Ω u(x) dx.
Remark:

1. We need u − ū to exclude constant functions.
2. C depends on Ω; see the HW for Ω = B(x₀, R).

The second statement will be proved indirectly, with no explicit constant given; one can use an additional scaling argument to find the scaling.
Proof: By Sobolev–Gagliardo–Nirenberg,

||u||_{L^{p*}(Ω)} ≤ C_S ||Du||_{L^p(Ω)}.

Now we apply Hölder under the assumption that Ω is bounded:

||u||_{L^p(Ω)} ≤ C(Ω)||u||_{L^{p*}(Ω)} ≤ C(Ω)C_S||Du||_{L^p(Ω)}.
Proof of ||u − ū||_{L^p(Ω)} ≤ C||Du||_{L^p(Ω)}: Without loss of generality take ū = 0; otherwise apply the inequality to v = u − ū, noting Dv = Du.

Suppose the inequality fails. Then there exist u_j ∈ W^{1,p}(Ω) with ū_j = 0 such that

||u_j||_{L^p(Ω)} ≥ j ||Du_j||_{L^p(Ω)}.

We may suppose ||u_j||_{L^p(Ω)} = 1, so

1/j ≥ ||Du_j||_{L^p(Ω)} ⟹ Du_j → 0 in L^p(Ω)

and

||u_j||_{W^{1,p}(Ω)} = ||u_j||_{L^p(Ω)} + ||Du_j||_{L^p(Ω)} ≤ 1 + 1/j ≤ 2.

By the compact Sobolev embedding (see Section 9.7), there is a subsequence with u_{j_k} → u in L^p(Ω), and Du = 0 ⟹ u is constant. But 0 = ∫_Ω u_{j_k} dx → ∫_Ω u dx ⟹ u = 0, while 1 = ||u_{j_k}||_{L^p(Ω)} → ||u||_{L^p(Ω)} = 0. Contradiction. □
9.6.6 The Dual Space of H¹₀ = W^{1,2}₀

Definition 9.6.23: For Ω ⊂ Rⁿ bounded, H⁻¹(Ω) = (H¹₀)* = (W^{1,2}₀)*.
Theorem 9.6.24: Suppose that T ∈ H⁻¹(Ω). Then there exist functions f₀, f₁, ..., f_n ∈ L²(Ω) such that

T(v) = ∫_Ω (f₀v + Σ_{i=1}^n f_i D_i v) dx  ∀v ∈ H¹₀(Ω).   (9.20)

Moreover,

||T||²_{H⁻¹(Ω)} = inf {∫_Ω Σ_{i=0}^n |f_i|² dx : f_i ∈ L²(Ω), (9.20) holds}.

Remark: One frequently works formally with T = f₀ − Σ_{i=1}^n D_i f_i.
Proof: For u, v ∈ H¹₀(Ω), equip H¹₀(Ω) with the inner product

(u, v) = ∫_Ω (uv + Du·Dv) dx.

If T ∈ H⁻¹(Ω), then by Riesz there exists a unique u₀ such that

T(v) = (v, u₀) = ∫_Ω (vu₀ + Dv·Du₀) dx  ∀v ∈ H¹₀(Ω),   (9.21)

i.e. (9.20) holds with f₀ = u₀ and f_i = D_i u₀.

Now suppose g₀, g₁, ..., g_n also satisfy

T(v) = ∫_Ω (vg₀ + Σ_{i=1}^n D_i v · g_i) dx.
Taking v = u₀ in (9.21),

||u₀||²_{H¹₀(Ω)} = (u₀, u₀) = T(u₀) = ∫_Ω (u₀g₀ + Σ_{i=1}^n D_i u₀ g_i) dx
  ≤ ||u₀||_{L²(Ω)}||g₀||_{L²(Ω)} + Σ_{i=1}^n ||D_i u₀||_{L²(Ω)}||g_i||_{L²(Ω)}   (Hölder)
  ≤ ||u₀||_{H¹₀(Ω)} (∫_Ω (|g₀|² + |g₁|² + ··· + |g_n|²) dx)^{1/2}   (Cauchy–Schwarz).

On the other hand, (9.20) with f₀ = u₀ and f_i = D_i u₀ gives

T(v) = ∫_Ω (u₀v + D_i u₀ · D_i v) dx ≤ ||v||_{H¹₀(Ω)}||u₀||_{H¹₀(Ω)}.

Now, we see that

||T||_{H⁻¹(Ω)} = sup_{v∈H¹₀(Ω)} |T(v)| / ||v||_{H¹₀(Ω)} ≤ ||u₀||_{H¹₀(Ω)}.

On the other hand, choosing v = u₀/||u₀||_{H¹₀(Ω)},

T(v) = ||u₀||_{H¹₀(Ω)} ⟹ ||T||_{H⁻¹(Ω)} = ||u₀||_{H¹₀(Ω)}. □
9.7 Sobolev Imbeddings

Theorem 9.7.1: Let Ω be a bounded set in Rⁿ. Then the following embeddings are continuous:

i.) 1 ≤ p < n, p* = np/(n−p): W^{1,p}₀(Ω) ↪ L^{p*}(Ω);

ii.) p > n, α = 1 − n/p: W^{1,p}₀(Ω) ↪ C^{0,α}(Ω̄).

Remark:

1. i.) in the above theorem means that W^{1,p}₀(Ω) ⊆ L^{p*}(Ω) and ||u||_{L^{p*}(Ω)} ≤ C||u||_{W^{1,p}₀(Ω)}.

2. "Sobolev number": for W^{m,p}(Ω), Ω ⊆ Rⁿ, the Sobolev number is m − n/p. Since L^q(Ω) = W^{0,q}(Ω), its Sobolev number is −n/q.

Philosophy: W^{1,p}(Ω) ↪ L^q(Ω) if 1 − n/p ≥ −n/q, where the maximal q corresponds to equality:

1 − n/p = −n/q ⟺ (p − n)/p = −n/q ⟺ (n − p)/p = n/q ⟺ q = np/(n − p) = p*.

3. Analogously, C^{k,α}(Ω̄) has Sobolev number k + α:

W^{1,p}(Ω) ↪ C^{0,α}(Ω̄) if 1 − n/p ≥ 0 + α = α; optimal: α = 1 − n/p.

4. More generally: W^{m,p}(Ω) ↪ L^q(Ω) if m − n/p ≥ −n/q, and the optimal q satisfies 1/q = 1/p − m/n; likewise W^{m,p}(Ω) ↪ C^{0,α}(Ω̄) if m − n/p ≥ 0 + α = α.
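The Sobolev-number bookkeeping above is simple arithmetic; the following sketch (hypothetical helper names, exact rational arithmetic) makes the "philosophy" concrete.

```python
from fractions import Fraction as F

def sobolev_number(m, p, n):
    # differentiability minus dimension over integrability: m - n/p
    return F(m) - F(n) / F(p)

def p_star(p, n):
    # critical exponent p* = np/(n-p), defined for 1 <= p < n
    assert p < n
    return F(n) * F(p) / (F(n) - F(p))

# example: n = 3, p = 2 gives p* = 6, and the Sobolev numbers of
# W^{1,2} and L^{p*} agree, as predicted
q = p_star(2, 3)
assert q == 6
assert sobolev_number(1, 2, 3) == sobolev_number(0, q, 3)
```

The same comparison with any q > p* makes the L^q Sobolev number strictly larger, matching the failure of the embedding beyond the critical exponent.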
Proof: The case p = 1, p* = n/(n−1), is exactly the Sobolev–Gagliardo–Nirenberg inequality of Theorem 9.6.21, proved above.
Recall: Sobolev-Gagliardo-Nirenberg:

||u||_{L^{n/(n−1)}(R^n)} ≤ (1/√n)||Du||_{L^1(R^n)}, u ∈ W^{1,1}_0(R^n)
This was for p = 1; now consider the

General case: 1 ≤ p < n. Use the p = 1 estimate for |u|^γ:

|| |u|^γ ||_{L^{n/(n−1)}(R^n)} ≤ (1/√n) || D|u|^γ ||_{L^1(R^n)} = (1/√n) || γ|u|^{γ−1} D|u| ||_{L^1(R^n)}
≤ (Hölder) (γ/√n) || |u|^{γ−1} ||_{L^{p′}(R^n)} · ||Du||_{L^p(R^n)}

Choose γ such that

γn/(n−1) = (γ − 1)p′ = (γ − 1)p/(p − 1)
⇐⇒ γ (n/(n−1) − p/(p−1)) = −p/(p−1)
⇐⇒ γ (np − n − np + p)/((n−1)(p−1)) = −p/(p−1)
⇐⇒ γ (p − n)/((n−1)(p−1)) = −p/(p−1)
⇐⇒ γ = p(n−1)/(n−p) > 0

With this choice, γn/(n−1) = (γ − 1)p′ = p*, so the estimate becomes

(∫_{R^n} |u|^{p*} dx)^{(n−1)/n − (p−1)/p} ≤ ((n−1)p/(n−p)) (1/√n) ||Du||_{L^p(R^n)}

Performing some algebra on the exponent, we see

(n−1)/n − (p−1)/p = (n−p)/(np) = 1/p*

=⇒ ||u||_{L^{p*}(R^n)} ≤ ((n−1)p/(n−p)) (1/√n) ||Du||_{L^p(R^n)} = C_S ||Du||_{L^p(R^n)}
Remark: The estimate "explodes" as p → n.

This is a proof for W^{1,p}_0(Ω). For the general case with Lipschitz boundary, use the extension theorem: Ω ⊂⊂ Ω′ =⇒ there is an extension ū ∈ W^{1,p}_0(Ω′) such that ||ū||_{W^{1,p}(Ω′)} ≤ C_E ||u||_{W^{1,p}(Ω)}. Using the estimate for ū on Ω′:

||u||_{L^{p*}(Ω)} ≤ ||ū||_{L^{p*}(Ω′)} ≤ C_S ||ū||_{W^{1,p}(Ω′)} ≤ C_S C_E ||u||_{W^{1,p}(Ω)}
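As a numerical sanity check (my addition, not in the notes): the constant C_S = (n−1)p/((n−p)√n) obtained above can be tested with a radial Gaussian in R^3, p = 2, p* = 6, by one-dimensional radial quadrature. The function and grid are illustrative choices.

```python
import numpy as np

# Check ||u||_{L^{p*}} <= C_S ||Du||_{L^p} for u(x) = exp(-|x|^2) in R^3,
# with p = 2, p* = 6 and C_S = (n-1)p/((n-p)sqrt(n)) = 4/sqrt(3).
n, p = 3, 2
p_star = n * p / (n - p)                        # = 6
C_S = (n - 1) * p / ((n - p) * np.sqrt(n))      # ~ 2.309

r = np.linspace(0.0, 10.0, 200001)
dr = r[1] - r[0]
u = np.exp(-r**2)
du = 2 * r * np.exp(-r**2)                      # |Du| for a radial function
w = 4 * np.pi * r**2                            # spherical volume element in R^3

lhs = (np.sum(u**p_star * w) * dr) ** (1 / p_star)   # ||u||_{L^6} ~ 0.851
rhs = C_S * (np.sum(du**p * w) * dr) ** (1 / p)      # ~ 5.61
print(lhs, rhs)
```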
9.7.1 Campanato Imbeddings
Theorem 9.7.2 (Morrey's Theorem on the growth of the Dirichlet integral): Let u be in H^{1,p}_{loc}(B_1) and suppose that

∫_{B_ρ(x)} |Du|^p dx ≤ ρ^{n−p+pα} ∀ρ < dist(x, ∂B_1);

then u ∈ C^{0,α}_{loc}(B_1).
Proof: We use Poincaré's inequality:

∫_{B_ρ(x_0)} |u − u_{x_0,ρ}|^p dx ≤ Cρ^p ∫_{B_ρ(x_0)} |Du|^p dx ≤ C · ρ^{n+pα},

i.e. u ∈ L^{p,λ}_{loc}(B_1) with λ = n + pα, and therefore by 1) u ∈ C^{0,α}_{loc}(B_1).
Theorem 9.7.3 (Global Version of (9.7.2)): If u ∈ H^{1,p}(Ω), p > n, then u ∈ L^{1,n+(1−n/p)}(Ω) =⇒ u ∈ C^{0,1−n/p}(Ω) via Campanato's Theorem.
Proof: For all ρ < ρ_0 and u ∈ H^{1,p}(Ω) we have

∫_{Ω(x_0,ρ)} |u − u_{x_0,ρ}|^p dx ≤ Cρ^p ∫_{Ω(x_0,ρ)} |Du|^p dx,

where C only depends on the geometry of Ω; then we find

∫_{Ω(x_0,ρ)} |Du| dx ≤ (∫_{Ω(x_0,ρ)} |Du|^p dx)^{1/p} |Ω(x_0,ρ)|^{1−1/p} ≤ C · ||u||_{H^{1,p}(Ω)} ρ^{n−n/p}.

Again by Poincaré's inequality:

∫_{Ω(x_0,ρ)} |u − u_{x_0,ρ}| dx ≤ ρ ∫_{Ω(x_0,ρ)} |Du| dx ≤ Cρ^{n+1−n/p} ||u||_{H^{1,p}(Ω)}
Corollary 9.7.4 (Morrey's theorem): For p > n, the imbedding

W^{1,p}(Ω) ⊂−→ C^{0,1−n/p}(Ω)

is a continuous operator, i.e.

||u||_{C^{0,1−n/p}(Ω)} ≤ C||u||_{W^{1,p}(Ω)}

Remark: So, from the above we see

Du ∈ L^{p,λ}(Ω) =⇒ u ∈ L^{p,λ+p}(Ω)

(Du ∈ L^{2,n−2+2α}(Ω) =⇒ u ∈ L^{2,n+2α}(Ω)).
Theorem 9.7.5 (Estimates for W^{1,p}, n < p ≤ ∞): Let Ω be a bounded open subset of R^n with ∂Ω being C^1. Assume n < p ≤ ∞ and u ∈ W^{1,p}(Ω). Then u has a version u* ∈ C^{0,γ}(Ω), for γ = 1 − n/p, with the estimate

||u*||_{C^{0,γ}(Ω)} ≤ C||u||_{W^{1,p}(Ω)}.

The constant C depends only on p, n and Ω.

Proof: Since ∂Ω is C^1, there exists an extension Eu = ū ∈ W^{1,p}(R^n) such that ū = u in Ω, ū has compact support, and ||ū||_{W^{1,p}(R^n)} ≤ C||u||_{W^{1,p}(Ω)}.

Since ū has compact support, we know that there exist u_m ∈ C_c^∞(R^n) such that

u_m → ū in W^{1,p}(R^n)
Now from the previous theorem, ||u_m − u_l||_{C^{0,1−n/p}(R^n)} ≤ C||u_m − u_l||_{W^{1,p}(R^n)} for all l, m ≥ 1; thus there exists a function u* ∈ C^{0,1−n/p}(R^n) such that

u_m → u* in C^{0,1−n/p}(R^n)

So we see from the last two limits that u* = u a.e. on Ω, so that u* is a version of u. The last theorem also implies ||u_m||_{C^{0,1−n/p}(R^n)} ≤ C||u_m||_{W^{1,p}(R^n)}; combining this with the last two limits, we get

||u*||_{C^{0,1−n/p}(R^n)} ≤ C||ū||_{W^{1,p}(R^n)}
9.7.2 Compactness of Sobolev/Morrey Imbeddings
For 1 ≤ p < n, W^{1,p}_0(Ω) ⊂−→ L^q(Ω) is compact if q < p*, i.e. if (u_j) is a bounded sequence in W^{1,p}_0(Ω), then (u_j) is precompact in L^q(Ω), i.e. contains a subsequence that converges in L^q(Ω).
Proposition 9.7.6 (): Let X be a Banach space.

i.) A ⊂ X is precompact ⇐⇒ ∀ε > 0 there exist finitely many balls B(x_i, ε) such that A ⊂ ⋃_{i=1}^{m_ε} B(x_i, ε)

ii.) A is precompact ⇐⇒ every bounded sequence in A contains a convergent subsequence (with limit in X).
Theorem 9.7.7 (Arzela-Ascoli): Let S ⊆ R^n be compact; A ⊆ C^0(S) is precompact ⇐⇒ A is bounded and equicontinuous.

Proof: (=⇒) ∀ε > 0 there exist balls B(f_i^ε, ε) such that A ⊆ ⋃_{i=1}^{m_ε} B(f_i^ε, ε). For f ∈ B(f_i^ε, ε):

||f||_∞ ≤ ||f − f_i^ε||_∞ + ||f_i^ε||_∞ ≤ ε + max_{i=1,…,m_ε} ||f_i^ε||_∞ < ∞

|f(x) − f(y)| ≤ 2ε + |f_i^ε(x) − f_i^ε(y)| ≤ 2ε + max_{i=1,…,m_ε} |f_i^ε(x) − f_i^ε(y)|
so if |x − y| tends to zero =⇒ equicontinuity.

(⇐=) ∀ε > 0, ||f||_∞ ≤ C ∀f ∈ A. Choose ξ_i ∈ R, i = 1, …, k, such that B(0, C) = [−C, C] ⊂ ⋃_{i=1}^k B(ξ_i, ε), and x_j ∈ R^n, j = 1, …, m, such that S ⊂ ⋃_{j=1}^m B(x_j, ε). Basically we are covering the range and the domain.

Let π : {1, …, m} → {1, …, k} and let A_π = {f ∈ A : |f(x_j) − ξ_{π(j)}| < ε, j = 1, …, m}. Choose f_π ∈ A_π if A_π ≠ ∅. Every f ∈ A lies in some A_π, and if x ∈ S then x ∈ B(x_j, ε) for some j.

Claim: A ⊂ ⋃_π B(f_π, r_ε), i.e. the (finitely many) functions f_π define the centers of the balls. To show this, we see that for f ∈ A_π,

|f(x) − f_π(x)| ≤ |f(x) − f(x_j)| + |f(x_j) − ξ_{π(j)}| + |ξ_{π(j)} − f_π(x_j)| + |f_π(x_j) − f_π(x)|,

where the two middle terms are each ≤ ε, so

|f(x) − f_π(x)| ≤ 2ε + 2 sup_{|x−y|≤ε} sup_{g∈A} |g(x) − g(y)| =: r_ε

and r_ε → 0 as ε → 0 since A is equicontinuous. =⇒ f ∈ B(f_π, r_ε). If you want a finite cover A ⊂ ⋃ B(f_π, δ) for a given δ > 0, choose ε small enough that r_ε ≤ δ.
Remark: If X, Y, and Z are Banach spaces and we have X ⊂−→ Y ⊂⊂−→ Z or X ⊂⊂−→ Y ⊂−→ Z, then X ⊂⊂−→ Z.
Theorem 9.7.8 (Compactness of Sobolev Imbeddings): If Ω ⊂ R^n is bounded, then the following imbeddings hold:

i.) For 1 ≤ p < n and q < p* = np/(n − p),

W^{1,p}_0(Ω) ⊂⊂−→ L^q(Ω)

Moreover, if ∂Ω is Lipschitz, then the same is true for W^{1,p}(Ω).

ii.) If Ω ⊂ R^n is bounded and satisfies condition () with p > n and 1 − n/p > α, then

W^{1,p}_0(Ω) ⊂⊂−→ C^{0,α}(Ω)

Remark: A special case of the above is W^{1,2}_0(Ω) ⊂⊂−→ L^2(Ω); this is called Rellich's Theorem.
Proof:

i.) First take q = 1: let (f_j) be bounded in W^{1,p}_0(Ω); we need to prove that (f_j) is precompact in L^1. Extend by zero to R^n. Without loss of generality, take the f_j to be smooth, f_j ∈ C_c^∞(R^n) (replacing f_j by a smooth approximation f̃_j with ||f_j − f̃_j|| ≤ ε).
1. Let A_ε = {φ_ε ∗ f_j}, where φ_ε is the standard mollifier, φ_ε(x) = ε^{−n}φ(ε^{−1}x). Then A_ε is precompact in L^1 for all ε > 0. We now show this.

Idea: Use Arzela-Ascoli.

|φ_ε ∗ f_j|(x) = |∫_{R^n} φ_ε(x − y) f_j(y) dy| ≤ sup|φ_ε| ∫_{R^n} |f_j(y)| dy = sup|φ_ε| · ||f_j||_{L^1(R^n)} ≤ C sup|φ_ε| · ||f_j||_{L^p(R^n)} ≤ C_ε < ∞

(Hölder in the last comparison, using the uniformly bounded supports.) Equicontinuity follows if D(φ_ε ∗ f_j) is uniformly bounded:

|Dφ_ε ∗ f_j|(x) ≤ sup|Dφ_ε| · C||f_j||_{L^p(R^n)} ≤ C_ε < ∞

=⇒ (φ_ε ∗ f_j) is uniformly bounded and hence equicontinuous =⇒ precompact.
Step 2: ||φ_ε ∗ f_j − f_j||_{L^1(R^n)} ≤ Cε ∀j.

2. (Proof of Step 2)

(φ_ε ∗ f_j − f_j)(x) = ∫_{B(x,ε)} φ_ε(x − y)(f_j(y) − f_j(x)) dy
(sub. y = x − εz) = ∫_{B(0,1)} φ(z)(f_j(x − εz) − f_j(x)) dz, where f_j(x − εz) − f_j(x) = ∫_0^1 (d/dt) f_j(x − tεz) dt.

Integrate in x:

∫_{R^n} |φ_ε ∗ f_j − f_j| dx ≤ ∫_{R^n} ∫_{B(0,1)} φ(z) ∫_0^1 |Df_j(x − tεz)| · |εz| dt dz dx
(Fubini) = ε ∫_{B(0,1)} φ(z) ∫_0^1 (∫_{R^n} |Df_j(x − tεz)| · |z| dx) dt dz,

where |z| ≤ 1 and ∫_{R^n} |Df_j(x − tεz)| dx = ∫_{R^n} |Df_j|(x) dx, so

≤ ε (∫_{R^n} |Df_j|(x) dx) ∫_{B(0,1)} φ(z) dz ≤ ε||Df_j||_{L^1(R^n)} ≤ Cε||Df_j||_{L^p(R^n)} ≤ εC

=⇒ ||φ_ε ∗ f_j − f_j||_{L^1(R^n)} ≤ εC uniformly ∀j.

3. Suppose δ > 0 is given; choose ε small enough that εC < δ/2. Since A_ε = {φ_ε ∗ f_j} is precompact in L^1(R^n), for δ/2 we find balls B(g_i, δ/2) such that

A_ε ⊆ ⋃_{i=1}^{m_ε} B(g_i, δ/2) =⇒ {f_j} ⊆ ⋃_{i=1}^{m_δ} B(g_i, δ) =⇒ {f_j} is precompact in L^1(R^n).
Remark: Evans constructs a converging subsequence.
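Step 2 can be illustrated in one dimension: discretely convolving with a normalized bump mollifier and comparing the L^1 error against ε||f′||_{L^1}. A minimal sketch (my addition; f, ε, and the grid are illustrative choices):

```python
import numpy as np

# Illustrate ||phi_eps * f - f||_{L^1} <= eps * ||f'||_{L^1} in 1D.
h = 0.001
x = np.arange(-5.0, 5.0, h)
f = np.exp(-x**2)
df = -2.0 * x * np.exp(-x**2)

eps = 0.1
z = np.arange(-eps, eps + h, h)           # odd-length, symmetric support
phi = np.zeros_like(z)
mask = np.abs(z) < eps
phi[mask] = np.exp(-1.0 / (1.0 - (z[mask] / eps) ** 2))
phi /= phi.sum() * h                      # normalize: integral of phi_eps = 1

smoothed = np.convolve(f, phi, mode='same') * h

err = np.sum(np.abs(smoothed - f)) * h    # ||phi_eps * f - f||_{L^1}
bound = eps * np.sum(np.abs(df)) * h      # eps * ||f'||_{L^1} = 0.2 here
print(err, bound)
```

For smooth f the error is in fact O(ε^2), so it sits well below the linear bound.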
Ver
sio
n::
31
10
20
11
Gregory T. von Nessi
264Chapter 9: Function Spaces
4. General q < p*. The trick is to use the "interpolation inequality":

||g||_{L^q(R^n)} ≤ ||g||^θ_{L^1(R^n)} ||g||^{1−θ}_{L^{p*}(R^n)}

where 1/q = θ/1 + (1 − θ)/p*. So if f_j → f in L^1(R^n), then

||f_j − f||_{L^q(R^n)} ≤ ||f_j − f||^θ_{L^1(R^n)} · ||f_j − f||^{1−θ}_{L^{p*}(R^n)} → 0,

since the first factor tends to 0 and the second is bounded by C. So we just need to prove the interpolation inequality.

Idea: Use Hölder with r = 1/(qθ) and the corresponding conjugate index

r′ = r/(r − 1) = (1/(qθ))/(1/(qθ) − 1) = 1/(1 − qθ)

Thus

(1 − θ)q/(1 − qθ) = p*

So,

∫ g^q dx = ∫ g^{qθ} g^{(1−θ)q} dx ≤ (Hölder) (∫ g dx)^{qθ} (∫ g^{p*} dx)^{1−qθ}

Now take the qth root:

||g||_{L^q(R^n)} ≤ ||g||^θ_{L^1(R^n)} ||g||^{(1−qθ)p*/q}_{L^{p*}(R^n)} = ||g||^θ_{L^1(R^n)} ||g||^{1−θ}_{L^{p*}(R^n)}
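Since Hölder's inequality holds exactly for discrete (weighted counting) measures as well, the interpolation inequality can be checked directly. A sketch (my addition; g and the exponents are illustrative choices):

```python
import numpy as np

# Check ||g||_{L^q} <= ||g||_{L^1}^theta * ||g||_{L^{p*}}^{1-theta}
# with 1/q = theta/1 + (1 - theta)/p*.
h = 0.001
x = np.arange(-8.0, 8.0, h)
g = np.abs(np.sin(3.0 * x)) * np.exp(-x**2)   # any nonnegative g works

p_star, theta = 4.0, 0.5
q = 1.0 / (theta + (1.0 - theta) / p_star)    # = 1.6 here

norm = lambda v, s: (np.sum(v**s) * h) ** (1.0 / s)
lhs = norm(g, q)
rhs = norm(g, 1.0) ** theta * norm(g, p_star) ** (1.0 - theta)
print(lhs, rhs)
```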
ii.) General Idea: We will prove that W^{1,p}(Ω) ⊂−→ C^{0,µ}(Ω) ⊂⊂−→ C^{0,λ}(Ω), where 0 < λ < µ < 1 − n/p, and then we will use the remark following the statement of the Arzela-Ascoli theorem to conclude W^{1,p}(Ω) ⊂⊂−→ C^{0,λ}(Ω).

Since 0 < λ < 1 − n/p, we know ∃µ with 0 < λ < µ < 1 − n/p; and by Morrey's theorem we know W^{1,p}(Ω) ⊂−→ C^{0,µ}(Ω). We now only need to show C^{0,µ}(Ω) ⊂⊂−→ C^{0,λ}(Ω). It is obvious that C^{0,µ}(Ω) ⊂−→ C^{0,λ}(Ω) is continuous, so our proof is reduced to showing the compactness of this imbedding. We do this in two steps.

Step 1. Claim: C^{0,ν}(Ω) ⊂⊂−→ C(Ω). Again, the continuity of the imbedding is obvious, and compactness is the only thing that needs to be proven. If A is bounded in C^{0,ν}(Ω), then we know ||φ||_{C^{0,ν}(Ω)} ≤ M for all φ ∈ A. This obviously implies that |φ(x)| ≤ M in addition to |φ(x) − φ(y)| ≤ M|x − y|^ν for all x, y ∈ Ω. Thus A is bounded and equicontinuous, hence precompact via the Arzela-Ascoli theorem.
Step 2. Consider

|φ(x) − φ(y)|/|x − y|^λ = (|φ(x) − φ(y)|/|x − y|^µ)^{λ/µ} |φ(x) − φ(y)|^{1−λ/µ} ≤ M^{λ/µ} · |φ(x) − φ(y)|^{1−λ/µ}

for all φ in a sequence A that is bounded in C^{0,µ}(Ω). We now use the compactness of the imbedding in Step 1 by extracting a subsequence of this bounded sequence converging in C(Ω). For this new sequence, the estimate above shows it is Cauchy, and so it converges, in C^{0,λ}(Ω).

This completes the proof.
9.7.3 General Sobolev Inequalities
Theorem 9.7.9 (General Sobolev inequalities): Let Ω be a bounded open subset of R^n, with a C^1 boundary.

i.) If

k < n/p, (9.22)

then W^{k,p}(Ω) ⊂−→ L^q(Ω), where

1/q = 1/p − k/n, (9.23)

with an imbedding constant depending on k, p, n and Ω.

ii.) If

k > n/p, (9.24)

then W^{k,p}(Ω) ⊂−→ C^{k−[n/p]−1,γ}(Ω), where

γ = [n/p] + 1 − n/p, if n/p is not an integer;
γ = any positive number < 1, if n/p is an integer;

with an imbedding constant depending on k, p, n, γ and Ω.
Proof. Step 1: Assume (9.22). Then since D^α u ∈ L^p(Ω) for all |α| = k, the Sobolev-Nirenberg-Gagliardo inequality implies

||D^β u||_{L^{p*}(Ω)} ≤ C||u||_{W^{k,p}(Ω)} if |β| = k − 1,

and so u ∈ W^{k−1,p*}(Ω). Similarly, we find u ∈ W^{k−2,p**}(Ω), where 1/p** = 1/p* − 1/n = 1/p − 2/n. Continuing, we eventually discover after k steps that u ∈ W^{0,q}(Ω) = L^q(Ω), for 1/q = 1/p − k/n. The corresponding continuity estimate follows from multiplying the relevant estimates at each stage of the above argument.
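The bookkeeping in Step 1 is just arithmetic on Sobolev conjugates, and can be checked directly (my addition; the values of n, p, k are illustrative):

```python
# Iterate the Sobolev conjugate p -> p* = np/(n - p) k times; the result
# matches the exponent from 1/q = 1/p - k/n.
n, p, k = 10.0, 2.0, 2
q = p
for _ in range(k):
    q = n * q / (n - q)       # one Sobolev-Nirenberg-Gagliardo step
print(q, 1.0 / (1.0 / p - k / n))   # both are 10/3
```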
Step 2: Assume now condition (9.24) holds, and n/p is not an integer. Then as above we see

u ∈ W^{k−l,r}(Ω), (9.25)

for

1/r = 1/p − l/n, (9.26)

provided lp < n. We choose the integer l so that

l < n/p < l + 1; (9.27)

that is, we set l = [n/p]. Consequently (9.26) and (9.27) imply r = pn/(n − pl) > n. Hence (9.25) and Morrey's inequality imply that D^α u ∈ C^{0,1−n/r}(Ω) for all |α| ≤ k − l − 1. Observe also that 1 − n/r = 1 − n/p + l = [n/p] + 1 − n/p. Thus u ∈ C^{k−[n/p]−1,[n/p]+1−n/p}(Ω), and the stated estimate follows.
Step 3: Finally, suppose (9.24) holds, with n/p an integer. Set l = [n/p] − 1 = n/p − 1. Consequently, we have as above u ∈ W^{k−l,r}(Ω) for r = pn/(n − pl) = n. Hence the Sobolev-Nirenberg-Gagliardo inequality shows D^α u ∈ L^q(Ω) for all n ≤ q < ∞ and all |α| ≤ k − l − 1 = k − [n/p]. Therefore Morrey's inequality further implies D^α u ∈ C^{0,1−n/q}(Ω) for all n < q < ∞ and all |α| ≤ k − [n/p] − 1. Consequently u ∈ C^{k−[n/p]−1,γ}(Ω) for each 0 < γ < 1. As before, the stated continuity estimate follows as well.
9.8 Difference Quotients
In this section we develop a tool that will enable us to determine the existence of derivatives
of certain solutions to various PDE. More succinctly, studies that pertain to this pursuit are
referred to collectively as regularity theory. We will go into this more in later chapters.
Recalling 1D calculus, we know that f is differentiable at x_0 if

lim_{x→x_0} (f(x) − f(x_0))/(x − x_0) = lim_{h→0} (f(x_0 + h) − f(x_0))/h

exists. In n dimensions, with f : Ω ⊂ R^n → R, we analogize to say that f has a partial derivative at x in the direction e_s if

lim_{h→0} ∆^s_h f(x) = lim_{h→0} (f(x + h e_s) − f(x))/h

exists, where

e_s = (0, …, 1, …, 0), with the 1 in the s-th place;

i.e., to have the s-th partial derivative, lim_{h→0} ∆^s_h f(x) must exist.

The power of using difference quotients is conveniently summed up in the following proposition.
Proposition 9.8.1 (): If 1 ≤ p < ∞, f ∈ W^{1,p}(Ω), Ω′ ⊂⊂ Ω, and h is small enough with h < dist(Ω′, ∂Ω), then

i.) ||∆^k_h f||_{L^p(Ω′)} ≤ C||D_k f||_{L^p(Ω)}, k = 1, …, n.

ii.) Moreover, if 1 < p < ∞ and

||∆^k_h f||_{L^p(Ω′)} ≤ K ∀h < dist(Ω′, ∂Ω),

then f ∈ W^{1,p}_{loc}(Ω) and ||D_k f||_{L^p(Ω′)} ≤ K, with ∆^k_h f → D_k f strongly in L^p(Ω′).
Proof: First, consider f ∈ W^{1,p}(Ω) ∩ C^1(Ω). We start off by calculating

|∆^k_h f(x)| = |(f(x + he_k) − f(x))/h| ≤ (1/h) ∫_0^h |D_k f(x + te_k)| dt ≤ (Hölder) h^{−1/p} (∫_0^h |D_k f(x + te_k)|^p dt)^{1/p}.

Taking the pth power, we obtain

|∆^k_h f(x)|^p ≤ (1/h) ∫_0^h |D_k f(x + te_k)|^p dt.

Next we integrate both sides over Ω′:

∫_{Ω′} |∆^k_h f(x)|^p dx ≤ (1/h) ∫_0^h ∫_{Ω′} |D_k f(x + te_k)|^p dx dt
= (1/h) ∫_0^h ∫_{Ω′+te_k} |D_k f(x)|^p dx dt
≤ (1/h) ∫_0^h ∫_Ω |D_k f(x)|^p dx dt
= ∫_Ω |D_k f|^p dx.

Now, we need to verify the last calculation for general f; we do this by the standard argument that W^{1,p}(Ω) ∩ C^1(Ω) is dense in W^{1,p}(Ω). Indeed, approximate f ∈ W^{1,p}(Ω) by f_n ∈ W^{1,p}(Ω) ∩ C^1(Ω) such that f_n → f in W^{1,p}(Ω), and use the above estimate for f_n; since both sides converge (f_n → f and Df_n → Df in L^p), we ascertain

∫_{Ω′} |∆^k_h f|^p dx ≤ ∫_Ω |D_k f|^p dx ∀f ∈ W^{1,p}(Ω),
for h fixed. This proves i.).
Now suppose that the hypothesis of ii.) holds for h < dist(Ω′, ∂Ω); i.e. ∆^k_h f is uniformly bounded in L^p(Ω′). Next, we construct a sequence ∆^k_{h_n} f by considering h_n ↓ 0. From the boundedness of ∆^k_h f in L^p(Ω′) (with 1 < p < ∞), we apply the Banach-Alaoglu theorem to establish the existence of a weakly convergent subsequence of ∆^k_{h_n} f. Upon relabeling such a subsequence, we define g_k as the weak limit, i.e. ∆^k_{h_n} f ⇀ g_k in L^p(Ω′).

Claim: g_k = D_k f on Ω′, i.e.,

∫_{Ω′} f · D_k φ dx = −∫_{Ω′} g_k φ dx ∀φ ∈ C_c^∞(Ω′).
To this end, we calculate

∫_{Ω′} ∆^k_h f · φ dx = ∫_{Ω′} ((f(x + he_k) − f(x))/h) φ(x) dx
= (1/h) ∫_{Ω′} f(x + he_k)φ(x) dx − (1/h) ∫_{Ω′} f(x)φ(x) dx
= (1/h) ∫_{Ω′+he_k} f(x)φ(x − he_k) dx − (1/h) ∫_{Ω′} f(x)φ(x) dx
= −∫_Ω f(x) ((φ(x − he_k) − φ(x))/(−h)) dx
= −∫_Ω f(x) (∆^k_{−h}φ(x)) dx
Now as D_k φ, ∆^k_{−h}φ ∈ C_c^∞(Ω), we can use a simple real-analysis argument to say that ∆^k_{−h}φ → D_k φ uniformly in Ω′. Thus, passing to the limit along h = h_n and using ∆^k_{h_n} f ⇀ g_k, we have shown

∫_{Ω′} g_k φ dx = −∫_{Ω′} f · D_k φ dx ∀φ ∈ C_c^∞(Ω′),

proving the claim. Moreover, this last equality clearly indicates that g_k ∈ L^p(Ω′), as f ∈ W^{1,p}(Ω). Now, we just have to show that ∆^k_h f → D_k f strongly in L^p(Ω′). As we did for part i.), first consider f ∈ W^{1,p}(Ω) ∩ C^1(Ω). Calculating, we see
∫_{Ω′} |∆^k_h f − D_k f|^p dx = ∫_{Ω′} |(1/h) ∫_0^h (D_k f(x + te_k) − D_k f(x)) dt|^p dx
≤ (Jensen) ∫_{Ω′} (1/h) ∫_0^h |D_k f(x + te_k) − D_k f(x)|^p dt dx
= (1/h) ∫_0^h ∫_{Ω′} |D_k f(x + te_k) − D_k f(x)|^p dx dt
Now, since f ∈ C^1(Ω), the integrand on the RHS is uniformly bounded and tends to zero pointwise, i.e. we can apply Lebesgue's dominated convergence theorem (to the inner integral) to state

lim_{h→0} ∫_{Ω′} |∆^k_h f − D_k f|^p dx = 0.

For general f ∈ W^{1,p}(Ω′), we do as before by picking f_n ∈ W^{1,p}(Ω) ∩ C^1(Ω) such that f_n → f in W^{1,p}(Ω′) to conclude that the above limit still holds.
9.9 A Note about Banach Algebras
The following theorem is sometimes useful in the manipulation of weak formulations of PDE.
Theorem 9.9.1 (): Let Ω ⊂ R^n be an open and bounded domain with smooth boundary. If p > n, then the space W^{1,p}(Ω) is a Banach algebra with respect to the usual multiplication of functions; that is, if f, g ∈ W^{1,p}(Ω), then f · g ∈ W^{1,p}(Ω).
Proof: The Sobolev embeddings imply that

f, g ∈ C^{0,α}(Ω) with α = 1 − n/p > 0

and that the estimates

||f||_{C^{0,α}(Ω)} ≤ C_0||f||_{W^{1,p}(Ω)}, ||g||_{C^{0,α}(Ω)} ≤ C_0||g||_{W^{1,p}(Ω)}

hold. For n ≥ 2 we have p′ < p and we may use the usual product rule to deduce that the weak derivative of fg is given by f · Dg + Df · g. Then

||D(fg)||^p_{L^p(Ω)} ≤ C ∫_Ω (|f · Dg|^p + |g · Df|^p) dx
≤ B(||f||^p_{L^∞(Ω)}||Dg||^p_{L^p(Ω)} + ||g||^p_{L^∞(Ω)}||Df||^p_{L^p(Ω)})
≤ 2BC_1||f||^p_{W^{1,p}(Ω)}||g||^p_{W^{1,p}(Ω)}.
The same arguments work for n = 1 and p ≥ 2. If p ∈ (1, 2), then we choose f_ε = φ_ε ∗ f, where φ_ε is a standard mollifier. Suppose that Ω = (a, b). Then f_ε → f in W^{1,1}_{loc}(a, b) and locally uniformly on (a, b), since f is continuous. It follows that

∫_a^b f_ε g φ′ dx = −∫_a^b (Dg · f_ε + g · Df_ε)φ dx.

As ε → 0, we conclude

∫_a^b f g φ′ dx = −∫_a^b (Dg · f + g · Df)φ dx.

Thus fg ∈ W^{1,1}(Ω) with D(fg) = Df · g + f · Dg. The p-integrability of the derivative follows from the embedding theorems as before.
9.10 Exercises
8.1: Let X = C^0([−1, 1]) be the space of all continuous functions on the interval [−1, 1], Y = C^1([−1, 1]) the space of all continuously differentiable functions, and Z = C^1_0([−1, 1]) the space of all functions u ∈ Y with boundary values u(−1) = u(1) = 0.
a) Define
(u, v) = ∫_{−1}^{1} u(x)v(x) dx.
On which of the spaces X, Y , and Z does (·, ·) define a scalar product? Now consider
those spaces for which (·, ·) defines a scalar product and endow them with the norm
|| · || = (·, ·)1/2. Which of the spaces is a pre-Hilbert space and which is a Hilbert space
(if any)?
b) Define
(u, v) = ∫_{−1}^{1} u′(x)v′(x) dx.
On which of the spaces X, Y , and Z does (·, ·) define a scalar product? Now consider
those spaces for which (·, ·) defines a scalar product and endow them with the norm
|| · || = (·, ·)1/2. Which of the spaces is a pre-Hilbert space and which is a Hilbert space
(if any)?
8.2: Suppose that Ω is an open and bounded domain in R^n. Let k ∈ N, k ≥ 0, and γ ∈ (0, 1]. We define the norm ||·||_{k,γ} in the following way. The γ-Hölder seminorm on Ω is given by

[u]_{γ;Ω} = sup_{x,y∈Ω, x≠y} |u(x) − u(y)|/|x − y|^γ,

and we define the maximum norm by

|u|_{∞;Ω} = sup_{x∈Ω} |u(x)|.

The Hölder space C^{k,γ}(Ω) consists of all functions u ∈ C^k(Ω) for which the norm

||u||_{k,γ;Ω} = ∑_{|α|≤k} ||D^α u||_{∞;Ω} + ∑_{|α|=k} [D^α u]_{γ;Ω}

is finite. Prove that C^{k,γ}(Ω) endowed with the norm ||·||_{k,γ;Ω} is a Banach space.

Hint: You may prove the result for C^{0,γ}([0, 1]) if you indicate what would change for the general result.
8.3: Consider the shift operator T : l^2 → l^2 given by

(Tx)_i = x_{i+1}.

Prove the following assertion. It is true that

x − Tx = 0 ⇐⇒ x = 0,

however, the equation

x − Tx = y

does not have a solution for all y ∈ l^2. This implies that T is not compact and Fredholm's alternative cannot be applied. Give an example of a bounded sequence x_j such that Tx_j does not have a convergent subsequence.
8.4: The space l^2 is a Hilbert space with respect to the scalar product

(x, y) = ∑_{i=1}^∞ x_i y_i.
Use general theorems to prove that the sequence of unit vectors in l^2 given by (e_i)_j = δ_{ij} contains a weakly convergent subsequence and characterize the weak limit.
8.5: Find an example of a sequence of integrable functions fi on the interval (0, 1) that
converge to an integrable function f a.e. such that the identity
lim_{i→∞} ∫_0^1 f_i(x) dx = ∫_0^1 lim_{i→∞} f_i(x) dx

does not hold.
8.6: Let a ∈ C1([0, 1]), a ≥ 1. Show that the ODE
−(au′)′ + u = f
has a solution u ∈ C2([0, 1]) with u(0) = u(1) = 0 for all f ∈ C0([0, 1]).
Hint: Use the method of continuity and compare the equation with the problem −u′′ = f .
In order to prove the a-priori estimate, derive first the estimate

sup |u_t| ≤ sup |f|,

where u_t is the solution corresponding to the parameter t ∈ [0, 1].
8.7: Let λ ∈ R. Prove the following alternative: Either (i) the homogeneous equation
−u′′ − λu = 0
has a nontrivial solution u ∈ C([0, 1]) with u(0) = u(1) = 0 or (ii) for all f ∈ C0([0, 1]),
there exists a unique solution u ∈ C2([0, 1]) with u(0) = u(1) = 0 of the equation
−u′′ − λu = f .
Moreover, the mapping Rλ : f 7→ u is bounded as a mapping from C0([0, 1]) into C2([0, 1]).
Hint: Use Arzela-Ascoli.
8.8: Let 1 < p ≤ ∞ and α = 1 − 1/p. Then there exists a constant C that depends only on a, b, and p such that for all u ∈ C^1([a, b]) and x_0 ∈ [a, b] the inequality

||u||_{C^{0,α}([a,b])} ≤ |u(x_0)| + C||u′||_{L^p([a,b])}

holds.
8.9: Let X = L^2(0, 1) and

M = {f ∈ X : ∫_0^1 f(x) dx = 0}.
Show that M is a closed subspace of X. Find M⊥ with respect to the notion of orthogonality
given by the L^2 scalar product

(·,·) : X × X → R, (u, v) = ∫_0^1 u(x)v(x) dx.
According to the projection theorem, every f ∈ X can be written as f = f_1 + f_2 with f_1 ∈ M and f_2 ∈ M^⊥. Characterize f_1 and f_2.
8.10: Suppose that p_i ∈ (1, ∞) for i = 1, …, n with

1/p_1 + ⋯ + 1/p_n = 1.

Prove the following generalization of Hölder's inequality: if f_i ∈ L^{p_i}(Ω), then

f = ∏_{i=1}^n f_i ∈ L^1(Ω) and ||∏_{i=1}^n f_i||_{L^1(Ω)} ≤ ∏_{i=1}^n ||f_i||_{L^{p_i}(Ω)}.
8.11: [Gilbarg-Trudinger, Exercise 7.1] Let Ω be a bounded domain in R^n. If u is a measurable function on Ω such that |u|^p ∈ L^1(Ω) for some p ∈ R, we define

Φ_p(u) = ((1/|Ω|) ∫_Ω |u|^p dx)^{1/p}.

Show that

lim_{p→∞} Φ_p(u) = sup_Ω |u|.
8.12: Suppose that the estimate

||u − u_Ω||_{L^p(Ω)} ≤ C_Ω||Du||_{L^p(Ω)}

holds for Ω = B(0, 1) ⊂ R^n with a constant C_1 > 0 for all u ∈ W^{1,p}_0(B(0, 1)). Here u_Ω denotes the mean value of u on Ω, that is,

u_Ω = (1/|Ω|) ∫_Ω u(z) dz.

Find the corresponding estimate for Ω = B(x_0, R), R > 0, using a suitable change of coordinates. What is C_{B(x_0,R)} in terms of C_1 and R?
8.13: [Qualifying exam 08/99] Let f ∈ L^1(0, 1) and suppose that

∫_0^1 f φ′ dx = 0 ∀φ ∈ C_0^∞(0, 1).

Show that f is constant.

Hint: Use convolution, i.e., show the assertion first for f_ε = ρ_ε ∗ f.
8.14: Let g ∈ L^1(a, b) and define f by

f(x) = ∫_a^x g(y) dy.

Prove that f ∈ W^{1,1}(a, b) with Df = g.

Hint: Use the fundamental theorem of calculus for an approximation g_k of g. Show that the corresponding sequence f_k is a Cauchy sequence in L^1(a, b).
8.15: [Evans 5.10 #8,9] Use integration by parts to prove the following interpolation inequalities.
a) If u ∈ H^2(Ω) ∩ H^1_0(Ω), then

∫_Ω |Du|^2 dx ≤ C (∫_Ω u^2 dx)^{1/2} (∫_Ω |D^2 u|^2 dx)^{1/2}

b) If u ∈ W^{2,p}(Ω) ∩ W^{1,p}_0(Ω) with p ∈ [2, ∞), then

∫_Ω |Du|^p dx ≤ C (∫_Ω |u|^p dx)^{1/2} (∫_Ω |D^2 u|^p dx)^{1/2}.
Hint: Assertion a) is a special case of b), but it might be useful to do a) first. For simplicity, you may assume that u ∈ C_c^∞(Ω). Also note that

∫_Ω |Du|^p dx = ∑_{i=1}^n ∫_Ω |D_i u|^2 |Du|^{p−2} dx.
8.16: [Qualifying exam 08/01] Suppose that u ∈ H^1(R), and for simplicity suppose that u is continuously differentiable. Prove that

∑_{n=−∞}^∞ |u(n)|^2 < ∞.
8.17: Suppose that n ≥ 2, that Ω = B(0, 1), and that x_0 ∈ Ω. Let 1 ≤ p < ∞. Show that u ∈ C^1(Ω\{x_0}) belongs to W^{1,p}(Ω) if u and its classical derivative ∇u satisfy the following inequality:

∫_{Ω\{x_0}} (|u|^p + |∇u|^p) dx < ∞.

Note that the examples from the chapter show that the corresponding result does not hold for n = 1. This is related to the question of how big a set in R^n has to be in order to be "seen" by Sobolev functions.

Hint: Show the result first under the additional assumption that u ∈ L^∞(Ω) and then consider u_k = φ_k(u), where φ_k is a smooth function which is close to

ψ_k(s) = k if s ∈ (k, ∞); s if s ∈ [−k, k]; −k if s ∈ (−∞, −k).

You could choose φ_k to be the convolution of ψ_k with a fixed kernel.
8.18: Give an example of an open set Ω ⊂ R^n and a function u ∈ W^{1,∞}(Ω), such that u is not Lipschitz continuous on Ω.

Hint: Take Ω to be the open unit disk in R^2, with a slit removed.
8.19: Let Ω = B(0, 1/2) ⊂ R^n, n ≥ 2. Show that log log(1/|x|) ∈ W^{1,n}(Ω).
8.20: [Qualifying exam 01/00]
a) Show that the closed unit ball in the Hilbert space H = L2([0, 1]) is not compact.
b) Let g : [0, 1] → R be a continuous function and define the operator T : H → H by

(Tu)(x) = g(x)u(x) for x ∈ [0, 1].
Prove that T is a compact operator if and only if the function g is identically zero.
8.21: [Qualifying exam 08/00] Let Ω = B(0, 1) be the unit ball in R^3. For any function u : Ω → R, define the trace Tu by restricting u to ∂Ω, i.e., Tu = u|_{∂Ω}. Show that T : L^2(Ω) → L^2(∂Ω) is not bounded.
8.22: [Qualifying exam 01/00] Let H = H1([0, 1]) and let Tu = u(1).
a) Explain precisely how T is defined for all u ∈ H, and show that T is a bounded linear
functional on H.
b) Let (·,·)_H denote the standard inner product in H. By the Riesz representation theorem, there exists a unique v ∈ H such that Tu = (u, v)_H for all u ∈ H. Find v.
8.23: Let Ω = B(0, 1) ⊂ R^n with n ≥ 2, α ∈ R, and f(x) = |x|^α. For fixed α, determine for which values of p and q the function f belongs to L^p(Ω) and W^{1,q}(Ω), respectively. Is there a relation between the values?
8.24: Let Ω ⊂ R^n be an open and bounded domain with smooth boundary. Prove that for p > n, the space W^{1,p}(Ω) is a Banach algebra with respect to the usual multiplication of functions, that is, if f, g ∈ W^{1,p}(Ω), then fg ∈ W^{1,p}(Ω).
8.25: [Evans 5.10 #11] Show by example that if we have ||D^h u||_{L^1(V)} ≤ C for all 0 < |h| < (1/2)dist(V, ∂U), it does not necessarily follow that u ∈ W^{1,1}(U).
8.26: [Qualifying exam 01/01] Let A be a compact and self-adjoint operator in a Hilbert space H. Let {u_k}_{k∈N} be an orthonormal basis of H consisting of eigenfunctions of A with associated eigenvalues {λ_k}_{k∈N}. Prove that if λ ≠ 0 and λ is not an eigenvalue of A, then A − λI has a bounded inverse and

||(A − λI)^{−1}|| ≤ 1/(inf_{k∈N} |λ_k − λ|) < ∞.
8.27: [Qualifying exam 01/03] Let Ω be a bounded domain in Rn with smooth boundary.
a) For f ∈ L^2(Ω), show that there exists a u_f ∈ H^1_0(Ω) such that

||f||^2_{H^{−1}(Ω)} = ||u_f||^2_{H^1_0(Ω)} = (f, u_f)_{L^2(Ω)}.

b) Show that L^2(Ω) ⊂⊂−→ H^{−1}(Ω). You may use the fact that H^1_0(Ω) ⊂⊂−→ L^2(Ω).
8.28: [Qualifying exam 08/02] Let Ω be a bounded, connected and open set in R^n with smooth boundary. Let V be a closed subspace of H^1(Ω) that does not contain nonzero constant functions. Using H^1(Ω) ⊂⊂−→ L^2(Ω), show that there exists a constant C < ∞, independent of u, such that for u ∈ V,

∫_Ω |u|^2 dx ≤ C ∫_Ω |Du|^2 dx.
10 Linear Elliptic PDE
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
10.1 Existence of Weak Solutions for Laplace's Equation
10.2 Existence for General Elliptic Equations of 2nd Order
10.3 Elliptic Regularity I: Sobolev Estimates
10.4 Elliptic Regularity II: Schauder Estimates
10.5 Estimates for Higher Derivatives
10.6 Eigenfunction Expansions for Elliptic Operators
10.7 Applications to Parabolic Equations of 2nd Order
10.8 Neumann Boundary Conditions
10.9 Semigroup Theory
10.10 Applications to 2nd Order Parabolic PDE
10.11 Exercises
10.1 Existence of Weak Solutions for Laplace’s Equation
General assumption: Ω ⊂ R^n open and bounded.

Want to solve:

−∆u = f in Ω
u = 0 on ∂Ω (10.1)

Multiply by v ∈ W^{1,2}_0(Ω) and integrate by parts:

∫_Ω Du · Dv dx = ∫_Ω f v dx
Definition 10.1.1: Let f ∈ L^2(Ω). A function u ∈ W^{1,2}_0(Ω) is called a weak solution of (10.1) if

∫_Ω Du · Dv dx = ∫_Ω f v dx ∀v ∈ W^{1,2}_0(Ω)

Theorem 10.1.2 (): The equation (10.1) has a unique weak solution in W^{1,2}_0(Ω), and we have ||u||_{W^{1,2}(Ω)} ≤ C||f||_{L^2(Ω)}.
Notation: u = (−∆)^{−1}f, with u = 0 on ∂Ω implicitly understood.
Proof: H^1_0 = W^{1,2}_0(Ω) is a Hilbert space. Define F : H^1_0 → R,

F(v) = ∫_Ω f v dx

Then F is linear and bounded:

|F(v)| = |∫_Ω f v dx| ≤ ∫_Ω |f v| dx ≤ (Hölder) ||f||_{L^2(Ω)}||v||_{L^2(Ω)} ≤ ||f||_{L^2(Ω)}||v||_{W^{1,2}(Ω)}

Define B : H × H → R, B(u, v) = ∫_Ω Du · Dv dx. B is bilinear. B is also bounded:

|B(u, v)| ≤ ||Du||_{L^2(Ω)}||Dv||_{L^2(Ω)} ≤ ||u||_{W^{1,2}(Ω)}||v||_{W^{1,2}(Ω)}

We can thus use Lax-Milgram if B is also coercive, i.e.,

B(u, u) ≥ ν||u||^2_{W^{1,2}(Ω)}, where B(u, u) = ∫_Ω |Du|^2 dx

So, to show this we use Poincaré:

||u||^2_{L^2(Ω)} ≤ C_p^2||Du||^2_{L^2(Ω)}
⇐⇒ ||u||^2_{W^{1,2}(Ω)} = ||u||^2_{L^2(Ω)} + ||Du||^2_{L^2(Ω)} ≤ (C_p^2 + 1)||Du||^2_{L^2(Ω)}
=⇒ ||Du||^2_{L^2(Ω)} ≥ (1/(C_p^2 + 1))||u||^2_{W^{1,2}(Ω)}

Thus,

B(u, u) ≥ (1/(C_p^2 + 1))||u||^2_{W^{1,2}(Ω)}, i.e. B is coercive.

Now Lax-Milgram says ∃u ∈ H^1_0(Ω) = W^{1,2}_0(Ω) such that

B(u, v) = F(v) ∀v ∈ H^1_0(Ω) ⇐⇒ ∫_Ω Du · Dv dx = ∫_Ω f v dx ∀v ∈ H^1_0(Ω),

i.e. u is the unique weak solution. The estimate ||u||_{W^{1,2}(Ω)} ≤ C||f||_{L^2(Ω)} follows from the estimate in Lax-Milgram.
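A minimal discrete sketch of this machinery (my addition, not part of the notes): in 1D, the stiffness matrix of −u″ with zero boundary values is the discrete counterpart of the bounded, coercive form B, and solving the linear system is the discrete Lax-Milgram step. The choice of f is illustrative.

```python
import numpy as np

# Solve -u'' = f on (0,1), u(0) = u(1) = 0, by finite differences, and
# compare with the exact solution for f = pi^2 sin(pi x).
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)[1:-1]       # interior nodes
f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi x)

A = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1)) / h**2   # discrete -u'': symmetric,
u = np.linalg.solve(A, f)                    # positive definite ("coercive")

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)   # O(h^2) discretization error
```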
Proposition 10.1.3 (): The operator (−∆)^{−1} : L^2(Ω) → L^2(Ω) is compact.

Proof: Since ||u||_{W^{1,2}(Ω)} ≤ C||f||_{L^2(Ω)}, we know that (−∆)^{−1} : L^2(Ω) → W^{1,2}_0(Ω) is bounded. From the compact embedding, we thus have

(−∆)^{−1} : L^2(Ω) → (bounded) W^{1,2}_0(Ω) ⊂⊂−→ L^2(Ω),

so (−∆)^{−1} maps bounded sets in L^2(Ω) into precompact sets in L^2(Ω), i.e., is compact.
Theorem 10.1.4 (): Either

i.) ∀f ∈ L^2(Ω) the equation

−∆u = λu + f in Ω, u = 0 on ∂Ω (10.2)

has a unique weak solution, or

ii.) there exists a nontrivial solution of

−∆u = λu in Ω, u = 0 on ∂Ω (10.3)

iii.) If ii.) holds, then the space N of solutions of (10.3) is finite dimensional.

iv.) The equation (10.2) has a solution ⇐⇒ (f, v)_{L^2(Ω)} = 0 ∀v ∈ N.

Remark: In general iv.) is true if f ⊥ N*, where N* is the null-space of the dual operator.
Proof: Almost all assertions follow from Fredholm's alternative in Hilbert spaces, once we show K = K*. (10.2) is equivalent to

u = λ(−∆)^{−1}u + (−∆)^{−1}f;

take K = (−∆)^{−1} compact and f̃ = (−∆)^{−1}f. Without loss of generality λ ≠ 0, so (10.2) reads (I − λK)u = f̃.

i.) This has a solution ⇐⇒ f̃ is perpendicular to the null-space of I − λK, i.e. (f̃, v) = 0 ∀v with v − λKv = 0. Now

(f̃, v) = (Kf, v) = (f, K*v) = (f, Kv) = (1/λ)(f, v),

so the solvability condition is (f, v) = 0 ∀v ∈ N, since λ ≠ 0.
Finally, we just need to prove K is self-adjoint, i.e. K = K*. Let f ∈ L^2(Ω) and h = Kf:

h = Kf ⇐⇒ h is a weak solution of −∆h = f, h ∈ W^{1,2}_0(Ω)
⇐⇒ ∫_Ω Dh · Dv dx = ∫_Ω f v dx ∀v ∈ W^{1,2}_0(Ω)
⇐⇒ ∫_Ω D(Kf) · Dv dx = ∫_Ω f v dx ∀v ∈ W^{1,2}_0(Ω) (*)

So, taking v = Kg in (*) for g ∈ L^2(Ω), we see

(K*f, g) = (f, Kg)
(by (*)) = (D(Kf), D(Kg))
(by symmetry) = (D(Kg), D(Kf))
= (g, Kf)
= (Kf, g)

=⇒ this holds ∀g ∈ L^2
=⇒ K*f = Kf ∀f
=⇒ K* = K.
———
−∆u = f, L = −∆, f = F ∈ H^{−1}(Ω):

B(u, v) = L(u, v) = ∫_Ω Du · Dv dx

Weak solution:

L(u, v) = ∫_Ω f v dx = 〈f, v〉 ∀v ∈ H^1_0(Ω)

Existence and uniqueness: Lax-Milgram.

More generally, we can solve

−∆u = g + ∑_{i=1}^n D_i f_i ∈ H^{−1}(Ω), g, f_i ∈ L^2(Ω)
Weak solution:

L(u, v) = 〈g + ∑_{i=1}^n D_i f_i, v〉 = ∫_Ω (gv − ∑_{i=1}^n f_i D_i v) dx
Notation: Summation convention. We frequently drop the summation symbol and sum over repeated indices.
General boundary conditions: given u_0 ∈ W^{1,2}(Ω), we want to solve

−∆u = 0 in Ω, u = u_0 on ∂Ω.

Define ũ = u − u_0 ∈ W^{1,2}_0(Ω), and find an equation for ũ:

∫_Ω Du · Dv dx = 0 ∀v ∈ H^1_0(Ω)
⇐⇒ ∫_Ω ((Du − Du_0) + Du_0) · Dv dx = 0
⇐⇒ ∫_Ω Dũ · Dv dx = −∫_Ω Du_0 · Dv dx,

an equation for ũ with a general H^{−1}(Ω) right-hand side.
10.2 Existence for General Elliptic Equations of 2nd Order
Lu = −D_i(a^{ij}D_j u + b^i u) + c^i D_i u + du

with a^{ij}, b^i, c^i, d ∈ L^∞(Ω). This general elliptic operator in divergence form, −div(A · Du), is natural to work with in the framework of weak solutions.

Non-divergence form:

Lu = a^{ij}D_i D_j u + b^i D_i u + cu

If the coefficients are smooth, then the non-divergence form is equivalent to the divergence form.

Corresponding bilinear form (multiply by v, integrate by parts):

L(u, v) = ∫_Ω (a^{ij}D_j u · D_i v + b^i u D_i v + c^i D_i u · v + duv) dx

We want to solve Lu = g + D_i f_i ∈ H^{−1}(Ω), i.e.,

L(u, v) = ∫_Ω (gv − f_i D_i v) dx ∀v ∈ H^1_0(Ω) (10.4)
Definition 10.2.1: u ∈ H^1_0(Ω) is a weak solution of Lu = g + D_i f_i if (10.4) holds.

Definition 10.2.2: L is strictly elliptic if a^{ij}ξ_i ξ_j ≥ λ|ξ|^2 ∀ξ ∈ R^n, for some λ > 0.

Theorem 10.2.3: Let L be strictly elliptic, let σ > 0 be sufficiently large, and let L_σ u = Lu + σu. Then

L_σ u = g + D_i f_i in Ω, u = 0 on ∂Ω

has a unique weak solution, and ||u||_{W^{1,2}(Ω)} ≤ C(σ)(||g||_{L^2(Ω)} + ||f||_{L^2(Ω)}), f = (f_1, …, f_n).
Proof: Lax–Milgram in H^1_0(Ω). The functional

F(v) = ∫_Ω (gv − f_i D_i v) dx ≤ (||g||_{L^2(Ω)} + ||f||_{L^2(Ω)}) ||v||_{W^{1,2}(Ω)}

is bounded.
Bilinear form:

L(u, v) = ∫_Ω (a^{ij} D_j u D_i v + b^i u D_i v + c^i D_i u v + duv) dx.

L is bounded, since by Hölder

L(u, v) ≤ C(a^{ij}, b^i, c^i, d) ||u||_{W^{1,2}(Ω)} ||v||_{W^{1,2}(Ω)}.

Crucial: check whether L is coercive. Write

L(u, u) = ∫_Ω a^{ij} D_j u D_i u dx + ∫_Ω (b^i u D_i u + c^i D_i u u + du^2) dx.

Ellipticity gives a^{ij} D_j u D_i u ≥ λ|Du|^2, while the lower-order terms are bounded by C(b^i, c^i, d) ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)}, so

L(u, u) ≥ λ ∫_Ω |Du|^2 dx − C ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)}.
——
Poincaré: ||u||^2_{L^2(Ω)} ≤ C_p^2 ||Du||^2_{L^2(Ω)}
⇒ ||u||^2_{L^2(Ω)} + ||Du||^2_{L^2(Ω)} ≤ (C_p^2 + 1) ||Du||^2_{L^2(Ω)}
⇒ ||Du||^2_{L^2(Ω)} ≥ (1/(C_p^2 + 1)) ||u||^2_{W^{1,2}(Ω)}.

——

Continuing the estimate,

L(u, u) ≥ (λ/(C_p^2 + 1)) ||u||^2_{W^{1,2}(Ω)} − C ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)}     (Poincaré)
       ≥ λ′ ||u||^2_{W^{1,2}(Ω)} − (λ′/2) ||u||^2_{W^{1,2}(Ω)} − C ||u||^2_{L^2(Ω)}     (Young's)
       = (λ′/2) ||u||^2_{W^{1,2}(Ω)} − C ||u||^2_{L^2(Ω)}.
In general, L is not coercive; but if we take L_σ u = Lu + σu, with

L_σ(u, v) = L(u, v) + σ ∫_Ω uv dx,

then

L_σ(u, u) ≥ (λ′/2) ||u||^2_{W^{1,2}(Ω)} + (σ − C) ||u||^2_{L^2(Ω)},

and L_σ is coercive as soon as σ > C(b^i, c^i, d). □
Question: Under what conditions can we solve

Lu = F ∀F ∈ H^{-1}(Ω)?

Answer: An application of Fredholm's alternative.
Lu = −D_i(a^{ij} D_j u + b^i u) + c^i D_i u + du.

Define:

⟨Lu, v⟩ = L(u, v) = ∫_Ω (a^{ij} D_j u D_i v + b^i u D_i v + c^i D_i u v + duv) dx.

The formally adjoint operator L∗ is given by

⟨L∗u, v⟩ = L∗(u, v) = L(v, u)
         = ∫_Ω (a^{ij} D_j v D_i u + b^i v D_i u + c^i D_i v u + duv) dx     (note a^{ij} D_j v D_i u = a^{ji} D_i v D_j u)
         = ∫_Ω (−D_i(a^{ji} D_j u + c^i u) + b^i D_i u + du) v dx,

i.e.

L∗u = −D_i(a^{ji} D_j u + c^i u) + b^i D_i u + du.
10.2.1 Application of the Fredholm Alternative
Theorem 10.2.4 (Fredholm's Alternative): Let L be strictly elliptic. Then either

i.) for all g, f_i ∈ L^2(Ω) the problem

Lu = g + D_i f_i in Ω,
u = 0 on ∂Ω

has a unique weak solution, or

ii.) there exists a non-trivial weak solution of the homogeneous problem

Lu = 0 in Ω,
u = 0 on ∂Ω.

iii.) If ii.) holds, then Lu = g + D_i f_i =: F has a solution ⇐⇒

⟨F, v⟩ = ∫_Ω (gv − f_i D_i v) dx = 0

∀v ∈ N(L∗), i.e. for all weak solutions of the adjoint equation L∗v = 0.
Proof: Setting up Fredholm:

Lu = F ∈ H^{-1}(Ω) ⇐⇒ L_σ u − σu = F,

with σ large enough that L_σ u = F has a unique solution ∀F. Formally, consider the embeddings

I_1 : H^1_0(Ω) ↪ L^2(Ω), I_0 : L^2(Ω) ↪ H^{-1}(Ω).

So I_2 = I_0 I_1 : H^1_0(Ω) → H^{-1}(Ω), and we interpret our equation as L_σ u − σ I_2 u = F in H^{-1}(Ω).
——
(I_0 u)(v) = ∫_Ω uv dx; I_0 u is bounded and linear.
——
u − σ L_σ^{-1} I_2 u = L_σ^{-1} F in H^1_0(Ω). (10.5)

One option: use Fredholm for this equation (done in Gilbarg and Trudinger). We will instead shift into L^2(Ω) first.
Interpret this as an equation in L2(Ω).
I_1 u − σ I_1 L_σ^{-1} I_2 u = I_1 L_σ^{-1} F
⇐⇒ (Id − σ I_1 L_σ^{-1} I_0) I_1 u = I_1 L_σ^{-1} F
⇐⇒ (Id − σ I_1 L_σ^{-1} I_0) ū = I_1 L_σ^{-1} F in L^2(Ω), (10.6)

where ū = I_1 u ∈ L^2(Ω).
Remark: If ū is a solution of (10.6), then

u = L_σ^{-1}(σ I_0 ū + F) (10.7)
is a solution of (10.5).
Now we come back to Fredholm's alternative:

K = σ I_1 L_σ^{-1} I_0 : L^2(Ω) → L^2(Ω),

where I_1 is compact and L_σ^{-1} I_0 is bounded, so K is compact.
Hence either (10.6) has a unique solution for every right-hand side in L^2(Ω), or the homogeneous equation has non-trivial solutions.
• What i.) means: take the right-hand side I_1 L_σ^{-1} F. Then there is a unique solution ū ∈ L^2(Ω), hence by (10.7) a unique solution u of (10.5), that is

u − σ L_σ^{-1} I_2 u = L_σ^{-1} F ⇐⇒ L_σ u − σ I_2 u = F ⇐⇒ Lu = F.
• What ii.) means: (Id − K) ū = 0, which yields a non-trivial weak solution of

u − σ L_σ^{-1} I_2 u = 0 ⇐⇒ L_σ u − σ I_2 u = 0 ⇐⇒ Lu = 0.
Proof of iii.): We claim K∗ = σ I_1 (L∗_σ)^{-1} I_0. By definition (see the aside below), (K∗g, h)_{L^2} = (g, Kh).

——

Lu = f ⇐⇒ L(u, v) = (f, v) ∀v; L_σ u = f ⇐⇒ L_σ(u, v) = (f, v) = L_σ(L_σ^{-1} f, v).
L∗u = f ⇐⇒ L∗(u, v) = (f, v) ∀v; L∗_σ u = f ⇐⇒ L∗_σ(u, v) = (f, v) = L∗_σ((L∗_σ)^{-1} f, v).
Moreover, L∗_σ(v, u) = L_σ(u, v).

——

(g, Kh) = L∗_σ((L∗_σ)^{-1} g, Kh)     (dual equation with v = Kh)
        = L_σ(Kh, (L∗_σ)^{-1} g)
        = σ L_σ(L_σ^{-1} h, (L∗_σ)^{-1} g)     (definition of K; L_σ^{-1} h ∈ H^1_0(Ω))
        = σ (h, (L∗_σ)^{-1} g)
        = σ ((L∗_σ)^{-1} g, h)

=⇒ ((K∗ − σ(L∗_σ)^{-1}) g, h) = 0 ∀h ∈ L^2, ∀g ∈ L^2, which proves the claim.
Solvability of the Original Equation

Lu = F ∈ H^{-1}(Ω) ⇐⇒ (I − K) ū = I_1 L_σ^{-1} F.

Fredholm in L^2: if we do not have a unique solution, then we can solve exactly when the right-hand side is perpendicular to the null-space of the adjoint operator I − K∗, i.e.,

(I_1 L_σ^{-1} F, v) = 0 ∀v ∈ N(I − K∗).

Now

v ∈ N(I − K∗) ⇐⇒ (I − σ(L∗_σ)^{-1}) v = 0 ⇐⇒ L∗_σ v − σ v = 0 ⇐⇒ L∗v = 0, i.e. v ∈ N(L∗).

So the solvability condition is (I_1 L_σ^{-1} F, v) = 0 ∀v ∈ N(L∗). Multiplying by σ and using σv = L∗_σ v,

(I_1 L_σ^{-1} F, σv) = (I_1 L_σ^{-1} F, L∗_σ v) = L∗_σ(v, L_σ^{-1} F) = L_σ(L_σ^{-1} F, v) = ⟨F, v⟩,

so the condition reads ⟨F, v⟩ = 0 ∀v ∈ N(L∗).
Remark: Consider ∆u = f in the space of periodic functions W^{1,2}_per. Obviously u ≡ constant is periodic with ∆u = 0, so the homogeneous equation has non-trivial solutions. The solvability condition coming from Fredholm's alternative turns out to be ∫_Ω f dx = 0.
10.2.2 Spectrum of Elliptic Operators
Theorem 10.2.5 (Spectrum of elliptic operators): Let L be strictly elliptic. Then there exists an at most countable set Σ ⊂ R such that the following holds:

1. if λ ∉ Σ, then

Lu = λu + g + D_i f_i in Ω,
u = 0 on ∂Ω

has a unique weak solution;

2. if λ ∈ Σ, then the eigenspace N(λ) = {u : Lu = λu} is non-trivial and has finite dimension;

3. if Σ is infinite, then Σ = {λ_k : k ∈ N} and λ_k → ∞ as k → ∞.
Remark: In 1D, −u″ = λu on [0, π] with u(0) = u(π) = 0 has eigenfunctions u_k(x) = sin(kx), i.e. λ_k = k² → ∞.

Proof: Essentially a consequence of the spectral theorem for compact operators (Theorem 7.10). Fredholm: uniqueness fails ⇐⇒ Lu = λu has a non-trivial solution, and Lu = λu ⇐⇒ Lu + σu = (λ + σ)u ⇐⇒ L_σ u = (λ + σ)u. Thus u = (λ + σ) L_σ^{-1} u = ((λ + σ)/σ) Ku, with K the compact operator as before, ⇐⇒ Ku = (σ/(λ + σ)) u. Hence the spectrum is at most countable, each eigenspace has finite dimension, and zero is the only possible accumulation point of the eigenvalues of K, so λ_k → ∞ if Σ is not finite. □
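The 1D remark above is easy to reproduce numerically (a sketch; grid size and tolerances are my choices): the standard second-difference approximation of −d²/dx² on (0, π) with Dirichlet conditions has eigenvalues (4/h²) sin²(kh/2), which approach λ_k = k² and march off to infinity, as the theorem predicts:

```python
import numpy as np

# Discrete -u'' on (0, π) with Dirichlet conditions: eigenvalues
# (4/h^2) sin^2(k h / 2) ≈ k^2 for the low modes.
N = 400                                   # interior grid points
h = np.pi / (N + 1)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2
lam = np.sort(np.linalg.eigvalsh(A))
print(lam[:4])                            # approaches 1, 4, 9, 16
```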
General Elliptic Equation Lu = f:

Lu = −D_i(a^{ij} D_j u + b^i u) + c^i D_i u + du,

L(u, v) = ⟨Lu, v⟩. Existence works when L is coercive, L(u, u) ≥ λ ||u||^2_{W^{1,2}(Ω)}; e.g. pass to L_σ u = Lu + σu to get existence of solutions, then Fredholm. If e.g. b^i, c^i, d = 0, then σ = 0 will work.
10.3 Elliptic Regularity I: Sobolev Estimates
Elliptic regularity is one of the most challenging aspects of theoretical PDE. Regularity basically studies "how good" a weak solution is; given that weak formulations of PDE are not unique by any means, regularity tends to be very diverse in terms of the types of calculations it requires.
The next two sections are devoted to the "classical" regularity results pertaining to linear elliptic PDE. "Classical" is quoted because we will actually be employing more modern methods pertaining to Campanato spaces for the derivation of Schauder estimates in the next section.

Aside: One will notice that the coming expositions focus heavily on Hölder estimates as opposed to approximations in Sobolev spaces. Philosophically speaking, Sobolev spaces are the tool to use for existence and are hence the starting assumption in many regularity arguments; but the ideal goal of regularity is to prove that a weak solution is indeed a classical one. Thus, in regularity we usually are aiming for continuous derivatives, i.e. solutions in Hölder spaces (at least).
To get the ball rolling, let us consider a brief example. This will hopefully help convince the reader that regularity is something that really does need to be proven, as opposed to a pedantic technicality that is always true.
Example 9: Consider the following problem in R:

−u″ = f in (0, 1),
u(0) = 0.

Integrating the equation twice, we see that

u(x) = −∫_0^x ∫_0^t f(s) ds dt

solves the original equation, but the harder question of the regularity of u persists. Now if f ∈ L²([0, 1]), it can be shown that u ∈ H²([0, 1]).
In higher dimensions, if u is a weak solution of −∆u = f, then f ∈ L²(Ω) usually implies u ∈ H²_loc(Ω) (provided ∂Ω has decent geometry, etc.). On the other hand, if f ∈ C⁰(Ω), it is generally not the case that u ∈ C²(Ω)! There is hope though; we will prove (along with the previous statements) that if f ∈ C^{0,α}(Ω) (α > 0), then u ∈ C²(Ω).
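The 1D example can be sanity-checked numerically. The sketch below (discretization choices are mine) builds u(x) = −∫₀ˣ∫₀ᵗ f(s) ds dt with two trapezoidal cumulative sums and verifies −u″ = f by central differences; note the minus sign forced by the equation −u″ = f:

```python
import numpy as np

# Check that u(x) = -∫_0^x ∫_0^t f(s) ds dt satisfies -u'' = f with u(0) = 0.
n = 2001
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)

F = np.concatenate([[0.0], np.cumsum(0.5 * h * (f[1:] + f[:-1]))])   # ∫_0^t f
u = -np.concatenate([[0.0], np.cumsum(0.5 * h * (F[1:] + F[:-1]))])  # -∫_0^x F

upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2                      # u'' on interior
residual = np.max(np.abs(-upp - f[1:-1]))
print(residual)
```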
The next proposition establishes one of the most important inequalities in regularity studies of elliptic PDE.
10.3.1 Caccioppoli Inequalities
In this first subsection, we will develop the essential Caccioppoli inequalities. Even though these inequalities ultimately provide a means of bounding first derivatives of solutions of elliptic equations, the conditions under which the inequality itself holds vary. Thus, we first consider the most intuitive example, Laplace's equation; following this, a more general situation will be presented.
Proposition 10.3.1 (Caccioppoli's inequality for −∆u = 0): If u ∈ H^1(B(x_0, R)) is a weak solution of −∆u = 0, i.e.

∫ Du · Dφ dx = 0 ∀φ ∈ H^1_0(B(x_0, R)),

then

∫_{B(x_0,r)} |Du|² dx ≤ (C/(R − r)²) ∫_{B(x_0,R)∖B(x_0,r)} |u − λ|² dx

for all λ ∈ R and 0 < r < R.
Proof: The proof of this proposition is based on the idea of picking a "good" test function in H^1_0(B(x_0, R)) to manipulate our weak formulation with. With the purpose of finding such a function, let us consider a function η such that

i.) η ∈ C^∞_c(B(x_0, R)),
ii.) 0 ≤ η ≤ 1,
iii.) η ≡ 1 on B(x_0, r), and
iv.) |Dη| ≤ C/(R − r).
Figure 10.1: A potential form of η in R²

See Figure 10.1 for a visualization of a potential η in R².
Next, fix λ ∈ R and define φ = η²(u − λ) ∈ W^{1,2}_0(B(x_0, R)). Calculating

Dφ = 2ηDη(u − λ) + η² Du

and plugging into the weak formulation, we ascertain

0 = ∫_{B(x_0,R)} Du · Dφ dx = ∫_{B(x_0,R)} (2ηDη · Du (u − λ) + η²|Du|²) dx.
Algebraic manipulation of the above equality shows that

∫_{B(x_0,R)} η²|Du|² dx = −∫_{B(x_0,R)} 2ηDη · Du (u − λ) dx
≤ ∫_{B(x_0,R)} 2η|Dη| |Du| |u − λ| dx
≤ 2 (∫_{B(x_0,R)} η²|Du|² dx)^{1/2} (∫_{B(x_0,R)} |Dη|²|u − λ|² dx)^{1/2}.     (Hölder)

Now we divide through by the first factor on the RHS and square the result; noting that Dη ≡ 0 on B(x_0, r) and |Dη|² ≤ C/(R − r)², we find that

∫_{B(x_0,R)} η²|Du|² dx ≤ 4 ∫_{B(x_0,R)∖B(x_0,r)} |Dη|² |u − λ|² dx.

This leads immediately to the conclusion that

∫_{B(x_0,r)} |Du|² dx ≤ (C/(R − r)²) ∫_{B(x_0,R)∖B(x_0,r)} |u − λ|² dx. □
Before moving on to Caccioppoli's inequalities for elliptic systems, we have the following special cases of Proposition 10.3.1:

Case 1: r = R/2, λ = 0:

∫_{B(x_0,R/2)} |Du|² dx ≤ (C/R²) ∫_{B(x_0,R)∖B(x_0,R/2)} |u|² dx ≤ (C/R²) ∫_{B(x_0,R)} |u|² dx.

Case 2: r = R/2, λ = u_{B(x_0,R)} (the mean value of u over B(x_0, R)):

∫_{B(x_0,R/2)} |Du|² dx ≤ (C/R²) ∫_{B(x_0,R)} |u − u_{B(x_0,R)}|² dx.

Case 3: r = R/2, λ = u_{B(x_0,R)∖B(x_0,R/2)}:

∫_{B(x_0,R/2)} |Du|² dx ≤ (C/R²) ∫_{B(x_0,R)∖B(x_0,R/2)} |u − u_{B(x_0,R)∖B(x_0,R/2)}|² dx
≤ C ∫_{B(x_0,R)∖B(x_0,R/2)} |Du|² dx.     (Poincaré)

We can "plug the hole" in the area of integration on the RHS by adding C times the LHS of the above to both sides of the inequality to get

(1 + C) ∫_{B(x_0,R/2)} |Du|² dx ≤ C ∫_{B(x_0,R)} |Du|² dx,
i.e.

∫_{B(x_0,R/2)} |Du|² dx ≤ (C/(C + 1)) ∫_{B(x_0,R)} |Du|² dx = Θ ∫_{B(x_0,R)} |Du|² dx, (10.8)

where 0 < Θ < 1 is constant!
Note: The general Caccioppoli inequalities do not require the integrations to be over concentric balls. It is easy to see that as long as B_{R/2}(y_0) ⊂ B_R(x_0), the inequality still holds (just appropriately modify η in the proofs). This generalization will be assumed in some of the more heuristic arguments to come.
A direct consequence of this last case is

Corollary 10.3.2 (Liouville's Theorem): If u ∈ H^1(R^n) is a weak solution of −∆u = 0, then u is constant.

Proof: Take the limit R → ∞ of both sides of (10.8) to see

∫_{R^n} |Du|² dx ≤ Θ ∫_{R^n} |Du|² dx,

which, since 0 < Θ < 1, implies ∫_{R^n} |Du|² dx = 0, i.e. Du = 0 a.e.; hence u is constant. □
Now we move on to consider elliptic systems of equations (which may or may not be linear).

Proposition 10.3.3 (Caccioppoli's inequality): Consider the elliptic system

−D_α(A^{αβ}_{ij}(x) D_β u^j) = g_i, for i = 1, …, m, (10.9)

where g_i ∈ H^{-1}(Ω). Suppose we choose f_i, f^α_i ∈ L²(Ω) such that g_i = f_i − D_α f^α_i, and consider a weak solution u ∈ H^1_loc(Ω) of (10.9), i.e.

∫_Ω A^{αβ}_{ij} D_β u^j D_α φ^i dx = ∫_Ω (f_i φ^i + f^α_i D_α φ^i) dx ∀φ ∈ H^1_0(Ω).

If we assume that

i.) A^{αβ}_{ij} ∈ L^∞(Ω),
ii.) A^{αβ}_{ij} ξ^i_α ξ^j_β ≥ ν|ξ|², ν > 0;

or

i.) A^{αβ}_{ij} = constant,
ii.) A^{αβ}_{ij} ξ^i ξ^j η_α η_β ≥ ν|ξ|²|η|², ν > 0, ∀ξ ∈ R^m and η ∈ R^n
(this is called the Legendre–Hadamard condition);

or

i.) A^{αβ}_{ij} ∈ C⁰(Ω),
ii.) the Legendre–Hadamard condition holds,
iii.) R is sufficiently small;
then the following Caccioppoli inequality is true:

∫_{B(x_0,R/2)} |Du|² dx ≤ C(n) ( (1/R²) ∫_{B(x_0,R)} |u − λ|² dx + R² ∫_{B(x_0,R)} Σ_i f_i² dx + ∫_{B(x_0,R)} Σ_{i,α} (f^α_i)² dx )

for all λ ∈ R^m.
Proof:¹ Here we will prove the third case; it is not hard to adapt the following estimates to prove cases i.) and ii.). From our weak formulation we derive

∫_{B_R(x_0)} A^{αβ}_{ij}(x_0) D_α u^i D_β φ^j dx = ∫_{B_R(x_0)} [A^{αβ}_{ij}(x_0) − A^{αβ}_{ij}(x)] D_α u^i D_β φ^j dx + ∫_{B_R(x_0)} f_i φ^i dx + ∫_{B_R(x_0)} f^α_i D_α φ^i dx

¹Can be omitted on the first reading.
for all φ ∈ H^1_0(B_R(x_0)). So, we consider the above again with φ = (u − λ)η². Now we calculate:

∫_{B_R(x_0)} |D[(u − λ)η]|² dx
≤ C ∫_{B_R(x_0)} A^{αβ}_{ij}(x_0) D_α[(u^i − λ^i)η] D_β[(u^j − λ^j)η] dx     (Legendre–Hadamard for constant coefficients)
= C ∫_{B_R(x_0)} A^{αβ}_{ij}(x_0) D_α u^i D_β[(u^j − λ^j)η²] dx + C ∫_{B_R(x_0)} A^{αβ}_{ij}(x_0) (u^i − λ^i)(u^j − λ^j) D_α η D_β η dx
= C ∫_{B_R(x_0)} [A^{αβ}_{ij}(x_0) − A^{αβ}_{ij}(x)] D_α u^i D_β[(u^j − λ^j)η²] dx
+ C ∫_{B_R(x_0)} f_i (u^i − λ^i)η² dx + C ∫_{B_R(x_0)} f^α_i D_α[(u^i − λ^i)η²] dx
+ C ∫_{B_R(x_0)} A^{αβ}_{ij}(x_0) (u^i − λ^i)(u^j − λ^j) D_α η D_β η dx
≤ δ ∫_{B_R(x_0)} |Du|²η² dx + 2δ ∫_{B_R(x_0)} |Du| |Dη| |u − λ| η dx
+ ∫_{B_R(x_0)} |f_i| |u^i − λ^i| η² dx + ∫_{B_R(x_0)} |f^α_i| |D_α u^i| η² dx
+ 2 ∫_{B_R(x_0)} Σ_α |f^α_i| |u^i − λ^i| η |D_α η| dx + C ∫_{B_R(x_0)} |u − λ|² |Dη|² dx
≤ δ ∫ |Du|²η² dx + 2δν ∫ |Du|²η² dx + (2δ/ν) ∫ |u − λ|²|Dη|² dx     (Young's)
+ (1/R²) ∫ |u − λ|² dx + R² ∫ Σ_i f_i² dx + (1/ν) ∫ Σ_{i,α} (f^α_i)² dx
+ ν ∫ |Du|²η² dx + (2/ν) ∫ Σ_{i,α} (f^α_i)² dx
+ 2ν ∫ |u − λ|²|Dη|² dx + C ∫ |u − λ|²|Dη|² dx
≤ (δ + 2δν + ν) ∫_{B_R(x_0)} |Du|²η² dx + (3/ν) ∫_{B_R(x_0)} Σ_{i,α} (f^α_i)² dx
+ ((1 + 2ν^{-1}δ + 2ν + C)/R²) ∫_{B_R(x_0)} |u − λ|² dx + R² ∫_{B_R(x_0)} Σ_i f_i² dx.
(10.10)
In case it is unclear, δ is the standard free constant of continuity, and C bounds all A^{αβ}_{ij} on B_R(x_0); also, ν and some R² terms arise via Young's inequality. Next, choose a specific η such that η ≡ 1 on B_{R/2}(x_0). Now, we consider
∫_{B_{R/2}(x_0)} |Du|² dx ≤ ∫_{B_R(x_0)} |Du|²η² dx
= ∫_{B_R(x_0)} ( |D[(u − λ)η]|² − |u − λ|²|Dη|² − 2 Du · Dη (u − λ)η ) dx
≤ ∫_{B_R(x_0)} |D[(u − λ)η]|² dx + 2ν ∫_{B_R(x_0)} |Du|²η² dx + (1 + 2/ν) ∫_{B_R(x_0)} |u − λ|²|Dη|² dx.     (Young's)

Solving this last inequality algebraically and utilizing the properties of η (here |Dη| ≤ C/R), we get

(1 − 2ν) ∫_{B_{R/2}(x_0)} |Du|² dx ≤ ∫_{B_R(x_0)} |D[(u − λ)η]|² dx + ((ν + 2)C/(νR²)) ∫_{B_R(x_0)} |u − λ|² dx.
Marching on, we put estimate (10.10) into the RHS of the above to ascertain

(1 − 2ν) ∫_{B_{R/2}(x_0)} |Du|² dx
≤ (δ + 2δν + ν) ∫_{B_R(x_0)} |Du|²η² dx + (3/ν) ∫_{B_R(x_0)} Σ_{i,α} (f^α_i)² dx
+ ((2ν² + (C + 2)ν + 2δ + 3)/R²) ∫_{B_R(x_0)} |u − λ|² dx + R² ∫_{B_R(x_0)} Σ_i f_i² dx.
Finally, solving algebraically we conclude that

(1 − 3ν − δ(1 + 2ν)) ∫_{B_{R/2}(x_0)} |Du|² dx
≤ (3/ν) ∫_{B_R(x_0)} Σ_{i,α} (f^α_i)² dx + ((2ν² + (C + 2)ν + 2δ + 3)/R²) ∫_{B_R(x_0)} |u − λ|² dx
+ R² ∫_{B_R(x_0)} Σ_i f_i² dx.
In the above we have used the properties of η and, after solving algebraically, expanded the RHS integrals to balls of radius R (obviously this can be done as we are integrating non-negative quantities). This last inequality brings us to our conclusion, as ν, δ, and C are independent of u, and δ and ν can be chosen small enough that the coefficient on the LHS remains positive. □
10.3.2 Interior Estimates
We will now start exploring the classical theory of linear elliptic regularity. To begin, we will go over estimation of elliptic solutions via bounding Sobolev norms, utilizing difference quotients. The first two theorems of this subsection deal with interior estimates, while the last theorem treats the issue of boundary regularity.

Changing the system under consideration, we will now work with an elliptic system of the form

−D_α(A^{αβ}_{ij}(x) D_β u^j) = f_i − D_α f^α_i, for i = 1, …, m,

where f_i ∈ L²(Ω) and f^α_i ∈ H^1_loc(Ω). Basically, we will no longer assume only that the inhomogeneous term in the system is an element of H^{-1}(Ω). Indeed, we will actually need true weak differentiability of the inhomogeneity (i.e. a bounded Sobolev norm) to attain the full regularity theory of these linear systems. With that, we consider a weak solution u ∈ H^1_loc(Ω)
of the elliptic system

∫_Ω A^{αβ}_{ij} D_β u^j D_α φ^i dx = ∫_Ω (f_i φ^i + f^α_i D_α φ^i) dx ∀φ ∈ H^1_0(Ω); (10.11)

this particular weak formulation is derived by the usual method of integration by parts. Now, we go on to our first theorem.
Theorem 10.3.4: Suppose that A^{αβ}_{ij} ∈ Lip(Ω), that (10.11) is elliptic, and that f_i ∈ L²(Ω) and f^α_i ∈ H^1(Ω). If u ∈ H^1_loc(Ω) is a weak solution of (10.11), then u ∈ H^2_loc(Ω).
Proof: We start off by recalling the notation from Section 9.8: u_h(x) := u(x + h e_s), where

e_s := (0, …, 1, …, 0)     (1 in the s-th place).

With this we consider Ω′ ⊂⊂ Ω with h small enough that u_h(x) is defined for all x ∈ Ω′. Now, we can rewrite our weak formulation as

∫_{Ω′} A^{αβ}_{ij}(x + h e_s) D_β u^j_h D_α φ^i dx = ∫_{Ω′} ( f_i(x + h e_s) φ^i(x) + f^α_i(x + h e_s) D_α φ^i(x) ) dx.
Subtracting the original weak formulation from this and dividing by h, we have

∫_{Ω′} A^{αβ}_{ij} [ (D_β u^j_h − D_β u^j)/h ] D_α φ^i dx + ∫_{Ω′} [ (A^{αβ}_{ij}(x + h e_s) − A^{αβ}_{ij}(x))/h ] D_β u^j D_α φ^i dx
= ∫_{Ω′} [ (f_i(x + h e_s) − f_i(x))/h ] φ^i(x) dx + ∫_{Ω′} [ (f^α_i(x + h e_s) − f^α_i(x))/h ] D_α φ^i dx.
Using the difference-quotient notation ∆^s_h from Section 9.8 and commuting the difference and differential operators on u, we obtain

∫_{Ω′} ( A^{αβ}_{ij} D_β(∆^s_h u^j) D_α φ^i + ∆^s_h A^{αβ}_{ij} D_β u^j D_α φ^i ) dx = ∫_{Ω′} ( ∆^s_h f_i φ^i + ∆^s_h f^α_i D_α φ^i ) dx.

We can algebraically manipulate this to get

∫_{Ω′} A^{αβ}_{ij} D_β(∆^s_h u^j) D_α φ^i dx = ∫_{Ω′} ( ∆^s_h f_i φ^i + (∆^s_h f^α_i − ∆^s_h A^{αβ}_{ij} D_β u^j) D_α φ^i ) dx. (10.12)
At this point, we would like to apply Caccioppoli's inequality to the above, but the first term on the RHS is not obviously bounded; in order to proceed we now prove that it is indeed bounded. So, let us look at this term with φ^i := η² ∆^s_h u^i, where η has the standard definition. After some algebraic tweaking (summation by parts and the product rule for difference quotients), we see that

∫_{Ω′} f_i ∆^s_{−h}(η² ∆^s_h u^i) dx = ∫_{Ω′} f_i η ∆^s_h u^i ∆^s_{−h} η dx + ∫_{Ω′} f_i η(x − h e_s) ∆^s_{−h}(η ∆^s_h u^i) dx.

We now estimate the second term on the RHS by applying Young's inequality, a derivative product rule, and the bounds relating difference quotients to derivatives. The ultimate result is:

∫_{Ω′} f_i ∆^s_{−h}(η² ∆^s_h u^i) dx
≤ ∫_{Ω′} f_i η ∆^s_h u^i ∆^s_{−h} η dx + (1/ν) ∫_{Ω′} |f_i η|² dx + ν [ ∫_{Ω′} η² |D(∆^s_h u^i)|² dx + ∫_{Ω′} |Dη|² |∆^s_h u^i|² dx ]     (Young's)
≤ ∫_{Ω′} f_i η ∆^s_h u^i ∆^s_{−h} η dx + C(R) ( sup_i ||f_i||_{L²(Ω′)} + sup_i ||u^i||_{H¹(Ω′)} + sup_i ∫_{Ω′} |D(∆^s_h u^i)|² dx )
≤ C(R) ( [ sup_i ||f_i||_{L²(Ω′)} + sup_i ||u^i||_{H¹(Ω′)} + 1 ]² + sup_i ∫_{Ω′} |D(∆^s_h u^i)|² dx ).     (Hölder)
Next, applying this estimate to (10.12) yields

∫_{Ω′} A^{αβ}_{ij} D_β(∆^s_h u^j) D_α φ^i dx
≤ ∫_{Ω′} ( f_i η ∆^s_h u^i ∆^s_{−h} η + (∆^s_h f^β_j − ∆^s_h A^{αβ}_{ij} D_α u^i) D_β φ^j ) dx
+ C(R) ( [ sup_i ||f_i||_{L²(Ω′)} + sup_i ||u^i||_{H¹(Ω′)} + 1 ]² + sup_i ∫_{Ω′} |D(∆^s_h u^i)|² dx ).
Finally, we can now apply Caccioppoli's inequality to this, despite the fact that the C(R) term is not accounted for: basically, we keep the C(R) term as is, and reflecting on the proof of Caccioppoli's inequality it is obvious that we can indeed say

∫_{B_{R/2}(x_0)} |D(∆^s_h u^i)|² dx
≤ C(R) ( (1/R²) ∫_{B_R(x_0)} |∆^s_h u^i|² dx + ∫_{B_R(x_0)} Σ_{j,α} |∆^s_h A^{αβ}_{ij}|² |D_β u^j|² dx
+ ∫_{B_R(x_0)} Σ_α |∆^s_h f^α_i|² dx + [ sup_i ||f_i||_{L²(B_R)} + sup_i ||u^i||_{H¹(B_R)} + 1 ]² ).

Since A^{αβ}_{ij} ∈ Lip(Ω) and f^α_i ∈ H^1(Ω), the RHS is bounded uniformly in h. Thus, we have finally bounded the difference quotients of the derivative of u. By Proposition 9.8.1 we have shown u ∈ H^2_loc(Ω). □
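The mechanism just used — uniform L² bounds on difference quotients forcing membership in a higher Sobolev space — can be watched in a quick sketch (the function and grid are my choices, for illustration only): for a smooth u, ||∆_h u||_{L²} stays below ||u′||_{L²} uniformly as h → 0, which is exactly the kind of bound Proposition 9.8.1 converts into existence of a weak derivative.

```python
import numpy as np

# ||Δ_h u||_{L²} ≤ ||u'||_{L²} uniformly in h: the uniform bound on
# difference quotients that characterizes membership in H¹.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
u = np.sin(2.0 * np.pi * x) ** 2
du = 2.0 * np.pi * np.sin(4.0 * np.pi * x)           # exact u'
norm_du = np.sqrt(np.sum(du ** 2) * dx)

norms = []
for m in (1, 10, 100, 1000):                          # h = m * dx
    h = m * dx
    dq = (u[m:] - u[:-m]) / h                         # difference quotient Δ_h u
    norms.append(np.sqrt(np.sum(dq ** 2) * dx))
print(norm_du, norms)
```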
Now, we move on to the next theorem, which generalizes the first.

Theorem 10.3.5 (Interior Sobolev Regularity): Suppose that A^{αβ}_{ij} ∈ C^{k,1}(Ω), that (10.11) is elliptic, and that f_i ∈ H^k(Ω) and f^α_i ∈ H^{k+1}(Ω). If u ∈ H^1_loc(Ω) is a weak solution of

−D_α(A^{αβ}_{ij}(x) D_β u^j) = f_i − D_α f^α_i, for i = 1, …, m,

then u ∈ H^{k+2}_loc(Ω).
Proof: We append the proof of Theorem 10.3.4, i.e. we have already shown that u ∈ H^2_loc(Ω). Now, let us set φ^i := D_s ψ^i, where ψ^i ∈ C^∞_0(Ω). Upon integrating the weak formulation by parts we ascertain

−∫_Ω D_s(A^{αβ}_{ij} D_β u^j) D_α ψ^i dx = ∫_Ω f_i D_s ψ^i dx − ∫_Ω D_s(f^α_i) D_α ψ^i dx.

Upon carrying out the D_s derivative on the LHS and algebraically manipulating, we conclude

∫_Ω A^{αβ}_{ij} D_β(D_s u^j) D_α ψ^i dx = −∫_Ω D_s(A^{αβ}_{ij}) D_β u^j D_α ψ^i dx − ∫_Ω f_i D_s ψ^i dx + ∫_Ω D_s(f^α_i) D_α ψ^i dx;

that is, D_s u is a weak solution of a system of the same form, with inhomogeneity built from D_s(A^{αβ}_{ij}) D_β u^j, f_i and D_s f^α_i.
Now, in the case k = 1 of the theorem, we have the assumptions D_s A^{αβ}_{ij} ∈ Lip(Ω), f_i ∈ H^1_loc(Ω) and f^α_i ∈ H^2_loc(Ω), i.e. we can apply Theorem 10.3.4 to the last equation to ascertain that D_s u ∈ H^2_loc(Ω), i.e. u ∈ W^{3,2}_loc(Ω). We get our conclusion by applying a standard induction argument to the above method. □
From this last theorem, it is clear that the regularity of the inhomogeneity and of the coefficients determines the regularity of our weak solution. We note particularly that if A^{αβ}_{ij}, f_i, f^α_i ∈ C^∞(Ω), then u ∈ C^∞(Ω). More specifically, if A^{αβ}_{ij} = const., f_i ≡ 0, and f^α_i ≡ 0, then for any B_R ⊂⊂ Ω and for any k, it can be seen from the above proofs that

||u||_{H^k(B_{R/2})} ≤ C(k, R) ||u||_{L²(B_R)}. (10.13)
10.3.3 Global Estimates
Now we move our discussion to the boundary of Ω. Fortunately, we have done most of the
hard work already for this situation.
Theorem 10.3.6: Suppose that ∂Ω ∈ C^{k+1}, A^{αβ}_{ij} ∈ C^k(Ω), f_i ∈ H^k(Ω) and f^α_i ∈ H^{k+1}(Ω). If u is a weak solution of the Dirichlet problem, i.e. u ∈ H^1_loc(Ω) solves

∫_Ω A^{αβ}_{ij} D_β u^j D_α φ^i dx = ∫_Ω (f_i φ^i + f^α_i D_α φ^i) dx

∀φ ∈ H^1_0(Ω) with u = u_0 on ∂Ω, then u ∈ H^{k+1}(Ω) and moreover

||u||_{H^{k+1}(Ω)} ≤ C(Ω) [ ||f||_{W^{k−1,2}(Ω)} + Σ_{i,α} ||f^α_i||_{H^k(Ω)} + ||u||_{L²(Ω)} ].

Proof: Setting v = u − u_0, it is clear that v satisfies the same system with a homogeneous boundary condition. So, without loss of generality, we assume u_0 = 0 in our problem. Since ∂Ω ∈ C^{k+1}, we know for any x_0 ∈ ∂Ω there exists a diffeomorphism

Φ : B⁺_R → W ∩ Ω
of class C^{k+1} which maps Γ_R onto W ∩ ∂Ω.
Figure 10.2: Boundary diffeomorphism
The diffeomorphism Φ induces a map Φ∗ defined by (Φ∗u)(y) := u(Φ(y)) for y ∈ B⁺_R. Since Φ is of class C^{k+1}, the chain rule dictates that Φ∗ is an isomorphism between H^{k+1}(B⁺_R) and H^{k+1}(Ω ∩ W). Upon defining u∗ := Φ∗u, the chain rule again says that Du(Φ(y)) = J_Φ^{-1} Du∗(y), where J_Φ is the Jacobian matrix of Φ. Recalling multi-dimensional calculus (change of variables), we see that u∗ satisfies

∫_{B⁺_R} Ã^{αβ}_{ij} D_β u^i∗ D_α φ^j dy = ∫_{B⁺_R} f̃_i φ^i dy + ∫_{B⁺_R} f̃^α_i D_α φ^i dy, (10.14)

where

Ã^{αβ}_{ij} = |det(J_Φ)| J_Φ^{-1} A^{αβ}_{ij} J_Φ,    f̃^α_i = |det(J_Φ)| J_Φ^{-1} Φ∗f^α_i,    f̃_i = |det(J_Φ)| Φ∗f_i.

From this we see that Ã^{αβ}_{ij} is a positive multiple of a similarity transformation of A^{αβ}_{ij}, i.e. Ã^{αβ}_{ij} is essentially a coordinate rotation of the original coefficients, and thus the transformed system is still elliptic.
After all that, we have reduced the problem to one with u∗ = 0 on Γ_R. Now, for every direction but x_n we can use the previous proof to bound the difference quotients. Specifically, we take φ^i = ∆^s_{−h}(η² ∆^s_h u^i∗) for s = 1, …, n − 1 and η ∈ C^∞_c(B_R(x_0)) to ascertain

||∆^s_h(D_α u^i∗)||_{L²(B⁺_{R/2})} ≤ C ( ||u^i∗||_{H¹(B⁺_R)} + sup_i ||f̃_i||_{L²(B⁺_R)} + sup_{α,i} ||f̃^α_i||_{H¹(B⁺_R)} ),
precisely as we did for the interior case. This only leaves D_{nn}u∗ to be handled. Looking at (10.14) and writing the whole thing out, we obtain

∫_{B⁺_R} Ã^{nn}_{ij} D_n u^j∗ D_n φ^i dy = − Σ_{(α,β)≠(n,n)} ∫_{B⁺_R} Ã^{αβ}_{ij} D_β u^j∗ D_α φ^i dy + ∫_{B⁺_R} f̃_i φ^i dy + ∫_{B⁺_R} f̃^α_i D_α φ^i dy.

This is an explicit representation of D_{nn}u∗ in the weak sense (note that Ã^{nn}_{ij} is invertible by ellipticity). Since the RHS is known to exist, we thus conclude that so must D_{nn}u∗. Now, we can apply the induction and covering arguments employed in earlier proofs of this section to obtain the result. □
10.4 Elliptic Regularity II: Schauder Estimates
In the last section we considered regularity results that pertained to estimates of Sobolev norms. As was alluded to at the beginning of the last section, these estimates provide a starting point for more useful estimates on linear elliptic systems. Namely, we now have enough tools to derive the classical Schauder estimates for linear elliptic systems. The power of these estimates is that they give us bounds on the Hölder norms of solutions, with relatively light criteria placed upon the weak solution being considered and the coefficients of the system. We will go through a gradual series of results, starting with the relatively specialized case of systems with constant coefficients and working up to results pertaining to non-divergence form systems with Hölder continuous coefficients.

Notation: For some of the following proofs, the indices have been omitted when they have no effect on the logic or idea of the corresponding proof. Such pedantic notation is simply too distracting when it is not needed for clarity.
To start off this section we will first go over a very useful lemma that will be used in several of the following proofs.

Lemma 10.4.1: Consider Ψ(ρ) non-negative and non-decreasing. If

Ψ(ρ) ≤ A [ (ρ/R)^α + ε ] Ψ(R) + B R^β ∀ρ < R ≤ R_0, (10.15)

with A, B, α, and β non-negative constants and α > β, then ∃ε_0(α, β, A) such that

Ψ(ρ) ≤ C(α, β, A) [ (ρ/R)^β Ψ(R) + B ρ^β ]

is true if (10.15) holds for some ε < ε_0.
Proof: First, take 0 < τ < 1 and set ρ = τR. Plugging into (10.15) gives

Ψ(τR) ≤ τ^α A [1 + ε τ^{−α}] Ψ(R) + B R^β.

Next fix γ ∈ (β, α) and choose τ such that 2Aτ^α = τ^γ. Now, pick ε_0 such that ε_0 τ^{−α} ≤ 1. Our assumption now becomes

Ψ(τR) ≤ 2Aτ^α Ψ(R) + B R^β ≤ τ^γ Ψ(R) + B R^β.

Now, we can use this last inequality as a scheme for iteration:

Ψ(τ^{k+1} R) ≤ τ^γ Ψ(τ^k R) + B (τ^k R)^β ≤ τ^γ [ τ^γ Ψ(τ^{k−1} R) + B (τ^{k−1} R)^β ] + B τ^{kβ} R^β;

and after doing this iteration k + 1 times, we ascertain

Ψ(τ^{k+1} R) ≤ τ^{(k+1)γ} Ψ(R) + B τ^{kβ} R^β Σ_{j=0}^{k} τ^{j(γ−β)},

where the geometric series converges since γ > β. Since ρ < R, there exists k such that τ^{k+1} R ≤ ρ < τ^k R. Consequent upon this choice of k and the monotonicity of Ψ:

Ψ(ρ) ≤ Ψ(τ^k R) ≤ τ^{kγ} Ψ(R) + C B (τ^{k−1} R)^β ≤ C(τ, γ, β) [ (ρ/R)^γ Ψ(R) + B ρ^β ],

and since γ > β, (ρ/R)^γ ≤ (ρ/R)^β, which gives the conclusion. □
Remark: The above proof might seem grossly unintuitive at first, but in reality, given so many free constants, the proof just systematically selects the constants to apply the iterative step that sets the exponents correctly.

The value of the above lemma is that it allows us to replace the B R^β term with B ρ^β, in addition to letting us drop the ε term. Thus Ψ can be bounded by powers of ρ via this lemma; a very useful tool for reconciling Campanato semi-norms (as one can intuitively guess from the definitions).
10.4.1 Systems with Constant Coefficients
Moving on to actual regularity results, we first consider the simplest systems, those with constant coefficients. After going through the regularity results, we will briefly explore their very interesting consequences.

Theorem 10.4.2: If u ∈ W^{1,p}_loc(Ω) is a weak solution of

D_α(A^{αβ}_{ij} D_β u^j) = 0, (10.16)

where the A^{αβ}_{ij} = const. satisfy the L-H condition, then for some fixed R_0 we have, for any ρ < R < R_0:

i.)

∫_{B_ρ(x_0)} |u|² dx ≤ C (ρ/R)^n ∫_{B_R(x_0)} |u|² dx. (10.17)

ii.) Moreover,

∫_{B_ρ(x_0)} |u − u_{x_0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |u − u_{x_0,R}|² dx. (10.18)
Proof of (10.17): First, if ρ ≥ R/2 we can clearly pick C such that both of the inequalities above are true. So we consider now the case ρ < R/2. Since u is a solution of (10.16), we recall the interior Sobolev estimate from the last section (Theorem 10.3.5) to write

||u||_{H^k(B_{R/2}(x_0))} ≤ C(k, R) ||u||_{L²(B_R)}.

Thus for ρ < R/2 we find

∫_{B_ρ(x_0)} |u|² dx ≤ C(n) ρ^n sup_{B_ρ(x_0)} |u|²
≤ C(n) ρ^n ||u||²_{H^k(B_{R/2}(x_0))}
≤ C(n, k, R) ρ^n ∫_{B_R(x_0)} |u|² dx.

The second inequality comes from the fact that H^k(Ω) ↪ C¹(Ω) ⊂ L^∞(Ω) for k chosen such that k > n/p (see Section 9.7). Also, we can employ a simple rescaling argument to see that C(n, k, R) = C(k) R^{−n}.
Proof of (10.18): This essentially is a consequence of (10.17) used in conjunction with the Poincaré and Caccioppoli inequalities:

∫_{B_ρ(x_0)} |u − u_{ρ,x_0}|² dx ≤ C ρ² ∫_{B_ρ(x_0)} |Du|² dx     (Poincaré)
≤ C ρ² (ρ/R)^n ∫_{B_{R/2}(x_0)} |Du|² dx
= C (ρ/R)^{n+2} R² ∫_{B_{R/2}(x_0)} |Du|² dx
≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |u − u_{x_0,R}|² dx.     (Caccioppoli)

Specifically, we have used (10.17) applied to Du to justify the second inequality. We can do this since Du also solves (10.16), the coefficients being constant. □
The above theorem has some very interesting consequences, which we will now go into. First, we note that Theorem 10.4.2 holds for all derivatives D^α u, as a derivative of a solution of a system with constant coefficients is itself a solution. With that in mind, let us consider the polynomial P_{m−1} such that

∫_{B_ρ(x_0)} D^α(u − P_{m−1}) dx = 0

for all |α| ≤ m − 1. With this particular choice of polynomial, we can now apply Poincaré's inequality m times to obtain

∫_{B_ρ(x_0)} |u − P_{m−1}|² dx ≤ C ρ^{2m} ∫_{B_ρ(x_0)} |D^m u|² dx.     (Poincaré)

Since D^m u also solves (10.16), we apply Theorem 10.4.2 to get

∫_{B_ρ(x_0)} |u − P_{m−1}|² dx ≤ C (ρ/R)^{n+2m} ∫_{B_R(x_0)} |u|² dx.

Now, if we place the additional stipulation on u that |u(x)| ≤ A|x|^σ with m = [σ] + 1, in addition to its solving (10.16), then by letting R → ∞ in the last inequality, we get

∫_{B_ρ(x_0)} |u − P_{m−1}|² dx = 0.

Thus,

If the growth of u is polynomial, then u actually is a polynomial!

This is a very interesting result in that we are actually able to determine the form of u merely from asymptotics!
We now look at non-homogeneous elliptic systems with constant coefficients. For this, suppose u ∈ W^{1,p}_loc(Ω) is a solution of

D_α(A^{αβ}_{ij} D_β u^j) = −D_α f^α_i, i = 1, …, m, (10.19)

where the A^{αβ}_{ij} are again constant and satisfy an L-H condition. In this situation, we have the following result:
Theorem 10.4.3: Suppose f ∈ L^{2,λ}_loc(Ω); then Du ∈ L^{2,λ}_loc(Ω), for 0 ≤ λ < n + 2.

Proof: We start by splitting u in B_R ⊂ Ω as u = v + (u − v) = v + w, where v is the solution (which exists by Lax–Milgram, see Section 10.2) of the Dirichlet BVP:

D_α(A^{αβ}_{ij} D_β v^j) = 0 in B_R, i = 1, …, m,
v = u on ∂B_R.

Since this system has constant coefficients, Dv also solves it locally, so by Theorem 10.4.2.ii) we have, for all ρ < R,

∫_{B_ρ(x_0)} |Dv − (Dv)_{x_0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |Dv − (Dv)_{x_0,R}|² dx.
Using this, we start our calculations:

∫_{B_ρ(x_0)} |Du − (Du)_{x_0,ρ}|² dx = ∫_{B_ρ(x_0)} |Dv + Dw − (Dv)_{x_0,ρ} − (Dw)_{x_0,ρ}|² dx
≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |Dv − (Dv)_{x_0,R}|² dx + C ∫_{B_ρ(x_0)} |Dw − (Dw)_{x_0,ρ}|² dx.

Next, we use the fact that v = u − w:

∫_{B_ρ(x_0)} |Du − (Du)_{x_0,ρ}|² dx
≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |Du − Dw − (Du)_{x_0,R} + (Dw)_{x_0,R}|² dx + C ∫_{B_ρ(x_0)} |Dw − (Dw)_{x_0,ρ}|² dx
≤ C (ρ/R)^{n+2} ∫_{B_R(x_0)} |Du − (Du)_{x_0,R}|² dx + 2 ∫_{B_ρ(x_0)} |D(u − v)|² dx + 2 ∫_{B_ρ(x_0)} |(Dw)_{x_0,ρ}|² dx.
(10.20)
Looking at the last term on the RHS, we see

∫_{Bρ(x0)} |(Dw)_{x0,ρ}|² dx ≤ (1/|Bρ|) (∫_{Bρ(x0)} |Dw| dx)² ≤ ∫_{Bρ(x0)} |Dw|² dx   (Hölder)
  = ∫_{Bρ(x0)} |D(u − v)|² dx.
Combining this inequality with (10.20), we see that

∫_{Bρ(x0)} |Du − (Du)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |Du − (Du)_{x0,R}|² dx + C ∫_{BR(x0)} |D(u − v)|² dx.   (10.21)
To bound the second term on the RHS, we use the L-H condition to ascertain

∫_{BR(x0)} |D(u − v)|² dx ≤ C ∫_{BR(x0)} A·D(u − v)·D(u − v) dx = C ∫_{BR(x0)} (f − f_{x0,R})·D(u − v) dx.

The second equality above comes from noticing that, since u solves (10.19) in the weak sense, u − v solves

D_α(A^{αβ}_{ij} D_β(u − v)^j) = −D_α(f^α_i − (f^α_i)_{x0,R}),  i = 1, …, m,

also in the weak sense (and we have taken u − v, which vanishes on ∂BR, to be the test function). From this, we see that

∫_{BR(x0)} |D(u − v)|² dx ≤ C ∫_{BR(x0)} |f − f_{x0,R}|² dx,
i.e., upon combining this with (10.21),

∫_{Bρ(x0)} |Du − (Du)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |Du − (Du)_{x0,R}|² dx + C [f]²_{L^{2,λ}(Ω)} R^λ.

To attain our result, we now apply Lemma 10.4.1 to the above, which justifies replacing the second term on the RHS by ρ^λ.
Looking closer at the above proof, it is clear that we have proved even more than the result stated in the theorem. Specifically, we have shown the bound

[Du]_{L^{2,λ}(Ω′)} ≤ C(Ω, Ω′) [ ||u||_{H¹(Ω)} + [f]_{L^{2,λ}(Ω)} ],  with Ω′ ⊂⊂ Ω,

to be true. An immediate consequence of Theorem 10.4.3 and Campanato's theorem is

Corollary 10.4.4 (): If f ∈ C^{0,µ}_{loc}(Ω), then Du ∈ C^{0,µ}_{loc}(Ω), where µ ∈ (0, 1).
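As a concrete numerical illustration of the Campanato-type decay behind Corollary 10.4.4 — a sketch of my own, not part of the notes — take n = 1 and u(x) = |x|^µ with µ = 1/2: the mean oscillation ∫_{Bρ} |u − u_ρ|² dx then decays exactly like ρ^{n+2µ} = ρ².

```python
import numpy as np

# Campanato quantity I(rho) = \int_{-rho}^{rho} |u - u_rho|^2 dx for the
# C^{0,mu} function u(x) = |x|^mu (mu = 1/2, n = 1).  Campanato's theorem
# links Hoelder continuity with exponent mu to the decay I(rho) ~ rho^{n+2mu};
# here n + 2*mu = 2.
mu = 0.5

def campanato(rho, num=20001):
    x = np.linspace(-rho, rho, num)
    u = np.abs(x) ** mu
    ubar = u.mean()                             # approximates the mean value u_rho
    return ((u - ubar) ** 2).mean() * 2 * rho   # approximates the integral

rhos = [0.1, 0.05, 0.025, 0.0125]
I = np.array([campanato(r) for r in rhos])
# decay exponent recovered from successive halvings of rho
exponents = np.log2(I[:-1] / I[1:])
print(exponents)   # each entry should be close to n + 2*mu = 2.0
```

For this particular u the scaling is exact (u is self-similar), so the fitted exponents match n + 2µ up to quadrature error.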
10.4.2 Systems with Variable Coefficients
Theorem 10.4.5 (): Suppose u ∈ W^{1,p}_{loc}(Ω) is a solution of (10.19) with A^{αβ}_{ij} ∈ C⁰(Ω) which also satisfy an L-H condition. If f^α_i ∈ L^{2,λ}(Ω) for 0 ≤ λ ≤ n (for λ < n this Campanato space coincides with the Morrey space), then Du ∈ L^{2,λ}_{loc}(Ω).

Proof: We first rewrite the elliptic system as

D_α(A^{αβ}_{ij}(x0) D_β u^j) = D_α[(A^{αβ}_{ij}(x0) − A^{αβ}_{ij}(x)) D_β u^j + f^α_i] =: D_α F^α_i,
where we have defined a new inhomogeneity. Now, we essentially have the situation of Theorem 10.4.3 with D_α F^α_i as the new inhomogeneity. Thus, using the exact method employed in that theorem, we ascertain
∫_{Bρ(x0)} |Du|² dx
  ≤ C (ρ/R)^n ∫_{BR(x0)} |Du|² dx + C ∫_{BR(x0)} |D(u − v)|² dx
  ≤ C (ρ/R)^n ∫_{BR(x0)} |Du|² dx + C ∫_{BR(x0)} |F|² dx
  ≤ C (ρ/R)^n ∫_{BR(x0)} |Du|² dx + C ∫_{BR(x0)} |[A(x) − A(x0)]Du|² dx + C ∫_{BR(x0)} |f|² dx
  ≤ C (ρ/R)^n ∫_{BR(x0)} |Du|² dx + C·ω(R) ∫_{BR(x0)} |Du|² dx + C ∫_{BR(x0)} |f|² dx,

where ω(R) := sup_{x ∈ BR(x0)} |A(x) − A(x0)|² is the (squared) modulus of continuity of A.
Now ∫_{BR(x0)} |f|² dx ≤ CR^λ, as f ∈ L^{2,λ}(Ω). With that in mind, if we choose R0 sufficiently small and ρ < R < R0, then ω(R) is small and we can again apply Lemma 10.4.1, with ω(R) corresponding to the ε term; this gives us the result.
Theorem 10.4.6 (): Let u ∈ W^{1,p}_{loc}(Ω) be a solution of

D_α(A^{αβ}_{ij}(x) D_β u^j) = D_α f^α_i,  i = 1, …, m,

with A^{αβ}_{ij} ∈ C^{0,µ}(Ω) which satisfy an L-H condition. If f^α_i ∈ C^{0,µ}(Ω), then Du ∈ C^{0,µ}_{loc}(Ω) and moreover

[Du]_{C^{0,µ}(Ω)} ≤ C [ ||u||_{H¹(Ω)} + [f]_{C^{0,µ}(Ω)} ].
Proof: Again, we use the exact same idea as in the previous proof to see that

∫_{Bρ(x0)} |Du − (Du)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |Du − (Du)_{x0,R}|² dx + C ∫_{BR(x0)} |f − f_{x0,R}|² dx + C ∫_{BR(x0)} |[A(x) − A(x0)]Du|² dx.   (10.22)
To bound this, we start by looking at the f-term on the RHS of the above:

∫_{BR(x0)} |f − f_{x0,R}|² dx ≤ ∫_{BR(x0)} |x − x0|^{2µ} [f]²_{C^{0,µ}(BR(x0))} dx ≤ CR^{2µ+n}.
Now we look at the coefficient term on the RHS of (10.22) to notice

C ∫_{BR(x0)} |[A(x) − A(x0)]·Du|² dx ≤ C ∫_{BR(x0)} (|x − x0|^µ [A]_{C^{0,µ}(BR(x0))})² |Du|² dx
  ≤ CR^{2µ} ∫_{BR(x0)} |Du|² dx ≤ CR^{2µ+n},

where the last step uses Theorem 10.4.5 (with λ = n) to bound ∫_{BR(x0)} |Du|² dx ≤ CR^n.
Putting these last two estimates into (10.22) yields

∫_{Bρ(x0)} |Du − (Du)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |Du − (Du)_{x0,R}|² dx + CR^{2µ+n}.

As usual, this fits the form required by Lemma 10.4.1, and consequently this leads us to our conclusion.
10.4.3 Interior Schauder Estimates for Elliptic Systems in Non-divergence Form
In this section, we consider systems in non-divergence form. Obviously, if A is continuously differentiable, there is no difference between the divergence and non-divergence cases; but in this subsection we will consider the situation where A is merely Hölder continuous. I.e., we consider

A^{αβ}_{ij} D_β D_α u^j = f_i,  i = 1, …, m,   (10.23)

with u ∈ H²_{loc}(Ω) and A^{αβ}_{ij} ∈ C^{0,µ}(Ω). In this situation, we have the following theorem.
Theorem 10.4.7 (): If f_i ∈ C^{0,µ}(Ω), then D_βD_αu ∈ C^{0,µ}(Ω); moreover,

[D_βD_αu]_{C^{0,µ}(Ω)} ≤ C [ ||D_βD_αu||_{L²(Ω)} + [f]_{C^{0,µ}(Ω)} ].
Proof: Essentially, we follow the progression of proofs of the last subsection, each of which remains almost entirely unchanged. Corresponding to the progression of the previous section, we consider the proof in three main steps.

Step 1: Suppose A = const. and f = 0. Then we are in a divergence-form situation and therefore

∫_{Bρ(x0)} |D_βD_αu − (D_βD_αu)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |D_βD_αu − (D_βD_αu)_{x0,R}|² dx.
Step 2: Suppose A = const. and f ≠ 0. Again we seek to get this into divergence form; to this end we set

z^i(x) := u^i(x) − ½ (D_βD_αu^i)_{x0,R} · x^β · x^α.

Thus

D_σD_γ z^i = D_σD_γ u^i − (D_σD_γ u^i)_{x0,R},

and plugging this into (10.23) yields

A^{αβ}_{ij} D_βD_α z^j = f_i − (f_i)_{x0,R}.

As in the previous section, we now split z = v + (z − v), where v is a solution of the homogeneous equation:

A^{αβ}_{ij} D_βD_α v^j = 0 in BR(x0);  v^j = z^j on ∂BR(x0).

With that, (10.23) indicates

A^{αβ}_{ij} D_βD_α(z^j − v^j) = f_i − (f_i)_{x0,R} in BR(x0);  z^j − v^j = 0 on ∂BR(x0).
As in the divergence-form case, we then have the estimate

∫_{BR(x0)} |D²(z − v)|² dx ≤ C ∫_{BR(x0)} |f − f_{x0,R}|² dx.

The spatial indices have been dropped above under the assumption that the supremum over i is taken on the RHS to bound all second-order derivatives (as stated at the beginning of this section, writing this triviality out would serve merely as a distraction at this point).
Using this estimate, we ascertain

∫_{Bρ(x0)} |D_βD_αz − (D_βD_αz)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |D_βD_αz − (D_βD_αz)_{x0,R}|² dx + C ∫_{BR(x0)} |f − f_{x0,R}|² dx,

i.e. (since D_βD_αz and D_βD_αu differ only by a constant)

∫_{Bρ(x0)} |D_βD_αu − (D_βD_αu)_{x0,ρ}|² dx ≤ C (ρ/R)^{n+2} ∫_{BR(x0)} |D_βD_αu − (D_βD_αu)_{x0,R}|² dx + C ∫_{BR(x0)} |f − f_{x0,R}|² dx.
Step 3: The proof now continues as in the divergence-form case, by writing the equation

A^{αβ}_{ij}(x) D_βD_α u^j = f_i

as

A^{αβ}_{ij}(x0) D_βD_α u^j = [A^{αβ}_{ij}(x0) − A^{αβ}_{ij}(x)] D_βD_α u^j + f_i =: F_i,

and concludes by using Step 2 with F_i in place of f_i.
10.4.4 Regularity Up to the Boundary: Dirichlet BVP
In this section, we consider solutions of the Dirichlet BVP

D_α(A^{αβ}_{ij}(x) D_β u^j) = D_α f^α_i in Ω;  u^j = g^j on ∂Ω ∈ C^{1,µ},

with f^α_i ∈ C^{0,µ}(Ω) and g ∈ C^{1,µ}(Ω), and we shall extend the local regularity result of Section 10.4.2. Again, we have already seen many of the techniques we will require here in previous proofs. So, we will simply sketch out these two proofs, pointing out the differences that should be considered.
Theorem 10.4.8 (): Let u ∈ W^{1,p}(Ω) be a solution of

D_α(A^{αβ}_{ij}(x) D_β u^j) = D_α f^α_i,  i = 1, …, m,

with A^{αβ}_{ij} ∈ C^{0,µ}(Ω) which satisfy an L-H condition. If f^α_i ∈ C^{0,µ}(Ω), then Du ∈ C^{0,µ}(Ω) and moreover

[Du]_{C^{0,µ}(Ω)} ≤ C [ ||u||_{H¹(Ω)} + [f]_{C^{0,µ}(Ω)} ].
Proof: First, we associate u with its trivial extension outside of Ω. Given the interior result of Theorem 10.4.6, we just have to prove boundary regularity to get the global result. So, as in the Sobolev estimate case, we may consider a zero BVP without loss of generality, as the transformation u ↦ u − g does not change our system. Also, in the same vein as the global Sobolev proof, we know that the assumption ∂Ω ∈ C^{1,µ} says that (locally) there exists a diffeomorphism

Ψ : U → B⁺_R

which flattens the boundary, i.e. maps ∂Ω onto Γ_R (see Figure 10.3).

In this proof we will be considering B⁺_R with the goal of finding estimates on B⁺_{R′}, where R′ = µR for some fixed constant µ < 1. The global estimate will then follow by using a standard covering argument.

So, with that in mind, we may suppose that Ω = B⁺_R and ∂Ω = Γ_R. Generally speaking, we seek the analogues of (10.17) and (10.18); once these are established, the completion of the proof mimics the progression of the proofs of the interior estimates. To prove these estimates we proceed in a series of steps.
[Figure 10.3: Boundary diffeomorphism — the map Ψ : U → B⁺_R flattens ∂Ω onto Γ_R near x0.]
Step 1: First, suppose A^{αβ}_{ij} = const. and f^α_i = 0. For ρ < R/2, we have

∫_{B⁺_ρ} |Du|² dx ≤ Cρⁿ sup_{B⁺_ρ} |Du|²   (Hölder)
  ≤ C(k)·ρⁿ ||u||²_{H^k(B⁺_{R/2})}
  ≤ C(k)·ρⁿ ||u||²_{H^k(B*)}
  ≤ C(R)·ρⁿ ∫_{B*} |u|² dx
  ≤ C(R)·ρⁿ ∫_{B⁺_R} |u|² dx
  ≤ C(R)·ρⁿ ∫_{B⁺_R} |u_{x_n}|² dx   (Caccioppoli),

where B⁺_{R/2} ⊂ B* ⊂ B⁺_R and ∂B* is C^∞. This result corresponds to (10.17). The fourth inequality in the above calculation comes from the Sobolev regularity estimates of the previous section; B* was introduced into the calculation precisely to satisfy the boundary smoothness required for the global Sobolev regularity. Also, as with the derivation of (10.17), the second inequality comes from the continuous embedding H^k(B⁺_{R/2}) ↪ C¹(B⁺_{R/2}) for k sufficiently large.
The counterpart of (10.18) is

∫_{B⁺_ρ} |Du − (Du)_ρ|² dx ≤ Cρ² ∫_{B⁺_ρ} |D²u|² dx   (Poincaré)
  ≤ Cρ^{n+2} ∫_{B⁺_R} |u_{x_n}|² dx   (Caccioppoli).

To proceed, we note that the same calculation can be carried out for u − λx_n (which is also a solution with zero boundary value on Γ_R); for this, we have to replace u_{x_n} by u_{x_n} − λ (λ = constant) in the above estimate.
Step 2: Now that we have a way to bound I(B⁺_ρ) (the integral over B⁺_ρ) for ρ < R/2, we need to bound I(Bρ(z)) for z ∈ B⁺_{R′}(x0) with ρ < R/2. To do this, we need to consider two cases for balls centered at z.
[Figure 10.4: Case 1 for the proof of Theorem 10.4.8 — the chain of balls Bρ(z) ⊂ B_r(z) ⊂ B⁺_{2r}(y) ⊂ B_{R/2}(z) inside B⁺_R(x0).]
Case 1: We first consider when dist(z, Γ) = r ≤ µR with Bρ(z) ⊂ B⁺_R(x0). For this, we first estimate I(Bρ(z)) by I(B_r(z)), then I(B_r(z)) by I(B⁺_{2r}(y)), and finally I(B⁺_{2r}(y)) by I(B_{R/2}(z)):

∫_{Bρ(z)} |u − u_{z,ρ}|² dx
  ≤ C (ρ/r)^{n+2} ∫_{B_r(z)} |u − u_{z,r}|² dx
  ≤ C (ρ/r)^{n+2} ∫_{B⁺_{2r}(y)} |u − u_{z,2r}|² dx
  ≤ D (ρ/r)^{n+2} (2r/√(R²/4 − r²))^{n+2} ∫_{B⁺_{√(R²/4 − r²)}(y)} |u − u_{z,R/2}|² dx
  ≤ D (4/(¼ − µ²))^{n+2} (ρ/r)^{n+2} (r/R)^{n+2} ∫_{B_{R/2}(z)} |u − u_{z,R/2}|² dx.

In the third inequality we use the analogue of the result from Step 1 for u (the proof is the same).
Case 2: In the second case we have dist(z, Γ) = r < µR with Bρ(z) ⊄ B⁺_R(x0). First we remember that we really only want to find a bound for I(Bρ(z) ∩ B⁺_R(x0)), as we are considering a trivial extension of u outside Ω. Heuristically, we first estimate I(Bρ(z) ∩ B⁺_R(x0)) by I(B⁺_{2ρ}(y)) and finally I(B⁺_{2ρ}(y)) by I(B_{R/2}(z)). Again, the calculation follows that of Case 1.
It is a straightforward geometrical calculation that µ can be taken to be 1/(2√17) in order to ensure B⁺_{2r}(y) ⊂ B_{R/2}(z), that Step 1 can be applied in the above cases, and that all the above integrations are well defined (i.e., the domains of integration only extend outside of B⁺_R(x0) over Γ, where the trivial extension of u is still well defined). Since µ is independent of the original boundary point x0, we can thus apply the standard covering argument to conclude that the analogues of (10.17) and (10.18) do indeed hold on the boundary.
10.4.4.1 The nonvariational case
We look at a solution of

A^{αβ}(x) u^j_{x_α x_β} = f^j ∈ C^{0,µ} in Ω;  u = g ∈ C^{2,µ} on ∂Ω ∈ C^{2,µ}.

As in 9.8.1 we find the equivalents of (10.17):

∫_{B⁺_ρ} |D_βD_αu|² dx ≤ C (ρ/R)^n ∫_{B⁺_R} |D_βD_αu|² dx,
∫_{B⁺_ρ} |D_βD_αu − (D_βD_αu)_ρ|² dx ≤ C (ρ/R)^{n+2} ∫_{B⁺_R} |D_βD_αu − (D_βD_αu)_R|² dx;   (10.24)

moreover we have

||u||_{C^{2,µ}(Ω)} ≤ C [ ||f||_{C^{0,µ}(Ω)} + ||D_βD_αu||_{L²(Ω)} ].

The proof is the same as in 9.8.1, with the only exception that in the second step, instead of z = u − ½(u_{x_α x_β})_{x0,R}·x^α·x^β, one takes

z = u − ½ (a^{mn}_{ij})^{-1} (f^j)_{x0,R} · x_β²,

and then a term C ∫_{BR(x0)} (f − f_{x0,R})² dx appears on the RHS of (10.24).
10.4.5 Existence and Uniqueness of Elliptic Systems
We finally show how the above a priori estimate implies existence. The idea is to use the continuation method.

Consider

L_t u := (1 − t)∆u + t·Lu = f in Ω;  u = 0 on ∂Ω,

where L is the given elliptic differential operator.

We assume that for L_t (t ∈ [0, 1]) and zero boundary values we have uniqueness. Set

Σ := {t ∈ [0, 1] : L_t is uniquely solvable}.
Since t = 0 ∈ Σ, Σ ≠ ∅.

We shall show that Σ is open and closed, and therefore Σ = [0, 1]. In particular we conclude (for t = 1) that

Lu = f in Ω;  u = 0 on ∂Ω

has a unique solution.

From the regularity theorem, we know that if A^{αβ} ∈ C^{0,µ} and f ∈ C^{0,µ}, then

||D_βD_αu||_{C^{0,µ}(Ω)} ≤ C [ ||f||_{C^{0,µ}(Ω)} + ||D_βD_αu||_{L²(Ω)} ].
Now the first step is to show the following.

Theorem 10.4.9 (): Suppose that for the elliptic operator Lu = A^{αβ}_{ij} u^i_{x_α x_β} the problem

Lu = 0 in Ω;  u = 0 on ∂Ω

has only the zero solution. Then, if

Lu = f in Ω;  u = 0 on ∂Ω,

we have

||u||_{H^{2,2}(Ω)} ≤ C₁ ||f||_{C^{0,µ}(Ω)}.
Proof: Suppose that the theorem is not true. Then there exist A^{αβ(k)}_{ij}, f^{(k)} such that for all k

A^{αβ(k)}_{ij} ξ_α ξ_β ≥ ν|ξ|²,  ||A^{(k)}||_{C^{0,µ}(Ω)} ≤ M,

and ||f^{(k)}||_{C^{0,µ}(Ω)} → 0 as k → ∞. If u^{(k)} is a solution with f^{(k)} and ||u^{(k)}||_{H^{2,2}(Ω)} = 1, then (along a subsequence) u^{(k)} → u as k → ∞ with Lu = 0 in Ω and u = 0 on ∂Ω, i.e. u = 0 — contradicting the normalization ||u^{(k)}||_{H^{2,2}(Ω)} = 1.
Remark: Uniqueness for second-order equations follows from Hopf's maximum principle. Suppose u is a solution of

A^{αβ} u_{x_α x_β} = 0 in Ω;  u = 0 on ∂Ω.

If u has a maximum point at x0 ∈ Ω, then the Hessian satisfies (u_{x_α x_β}(x0)) ≤ 0, and v := u + ε|x − x0|² still has an interior maximum point (for ε sufficiently small). But then A^{αβ} v_{x_α x_β} = A^{αβ} u_{x_α x_β} + ε·const > 0, and that is a contradiction. One concludes that u = 0.
As t = 0 ∈ Σ, we have

||D_βD_αu||_{L²(Ω)} ≤ C ||f||_{C^{0,µ}(Ω)}
and therefore

||D_βD_αu||_{C^{0,µ}(Ω)} ≤ C ||f||_{C^{0,µ}(Ω)}

by the a priori estimate above.
We show that Σ is closed: let t_k ∈ Σ with t_k → t as k → ∞ and

L_{t_k} u^{(k)} = f in Ω;  u^{(k)} = 0 on ∂Ω.

Because ||u^{(k)}_{xx}||_{C^{0,µ}(Ω)} ≤ C ||f||_{C^{0,µ}(Ω)}, we have u^{(k)} → u in C²(Ω) (along a subsequence) and

L_t u = f in Ω;  u = 0 on ∂Ω;

therefore t ∈ Σ.
Now we show that Σ is also open: let t0 ∈ Σ. Then there exists u such that

L_{t0} u = f in Ω;  u = 0 on ∂Ω.

For w ∈ C^{2,µ}(Ω) there exists a unique u_w such that

L_{t0} u_w = (L_{t0} − L_t)w + f in Ω;  u_w = 0 on ∂Ω.

We conclude that

||u_w||_{C^{2,µ}(Ω)} ≤ C|t − t0|·||w||_{C^{2,µ}(Ω)} + C·||f||_{C^{0,µ}(Ω)}

and

||u_{w₁} − u_{w₂}||_{C^{2,µ}(Ω)} ≤ C|t − t0|·||w₁ − w₂||_{C^{2,µ}(Ω)}.

If we choose |t − t0| =: δ < 1/C, we see that the operator

T : C^{2,µ}(Ω) → C^{2,µ}(Ω),  w ↦ u_w

is a contraction and therefore has a fixed point, which just means that (t0 − δ, t0 + δ) ⊂ Σ, i.e. that Σ is open.
10.5 Estimates for Higher Derivatives
Let u ∈ W^{1,2}(Ω) be a weak solution of −∆u = 0. There are two settings:

interior estimates: B(x0, R) ⊂⊂ Ω;
boundary estimates: x0 ∈ ∂Ω, after transformations ⟹ ∂Ω flat near x0.

Differentiating −∆u = 0 in the direction x_s gives −∆(D_s u) = 0, so v_s := D_s u is again a solution. Caccioppoli with v_s:

∫_{B(x0,R/2)} |Dv_s|² dx ≤ (C/R²) ∫_{B(x0,R)} |v_s|² dx,  i.e.  ∫_{B(x0,R/2)} |DD_su|² dx ≤ (C/R²) ∫_{B(x0,R)} |D_su|² dx

⟹ all 2nd derivatives are in L²_{loc}(Ω) ⟹ u ∈ W^{2,2}_{loc}(Ω).
To fully justify this, we replace derivatives with difference quotients.
Define u_h = u(x + he_s), e_s = (0, …, 1, …, 0). Suppose Ω′ ⊂⊂ Ω, with h small enough so that u_h is defined in Ω′. Take φ ∈ W^{1,2}₀(Ω′) and compute

∫_{Ω′} Du_h(x)·Dφ(x) dx = ∫_{Ω′} Du(x + he_s)·Dφ(x) dx = ∫_{he_s+Ω′ ⊆ Ω} Du(x)·Dφ(x − he_s) dx = ∫_Ω Du(x)·Dψ(x) dx,

where ψ(x) = φ(x − he_s) ∈ W^{1,2}₀(Ω). If u is a weak solution of −∆u = 0 in Ω, then u_h is a weak solution in Ω′!
Define the difference quotient ∆^s_h u = (u(x + he_s) − u(x))/h. Since

∫_{Ω′} Du_h(x)·Dφ(x) dx = 0  and  ∫_{Ω′} Du(x)·Dφ(x) dx = 0,

⟹ ∫_{Ω′} (Du_h(x) − Du(x))/h · Dφ(x) dx = 0  ∀φ ∈ W^{1,2}₀(Ω′).
⟹ ∆^s_h u = (u_h − u)/h is a weak solution of −∆u = 0 in Ω′.

Suppose that B(x0, 2R) ⊂ Ω′, h < R. The Caccioppoli inequality for ∆^s_h u gives

∫_{B(x0,R/2)} |D∆^s_h u|² dx ≤ (C/R²) ∫_{B(x0,R)} |∆^s_h u|² dx
  ≤ (C/R²) ∫_{B(x0,R+h)} |D_su|² dx   (by Theorem 8.59)
  ≤ (C/R²) ∫_{B(x0,2R)} |D_su|² dx.
By Theorem 8.56 (a uniform L² bound on the difference quotients implies existence of the weak derivative), DD_su exists, and we can pass to the limit to get

∫_{B(x0,R/2)} |DD_su|² dx ≤ (C/R²) ∫_{B(x0,2R)} |D_su|² dx.

Taking s = 1, …, n, we get

∫_{B(x0,R/2)} |D²u|² dx ≤ (C/R²) ∫_{B(x0,2R)} |Du|² dx  ⟹ u ∈ W^{2,2}_{loc}(Ω).

Now you can "boot-strap": apply this argument to v = D_su and get v ∈ W^{2,2}_{loc}(Ω) ⟹ u ∈ W^{3,2}_{loc}(Ω) ⟹ ⋯ ⟹ u ∈ W^{k,2}_{loc}(Ω) ∀k ⟹ u ∈ C^∞(Ω) by Sobolev embedding.
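The difference-quotient mechanism is easy to see numerically (an illustrative sketch of my own, not from the notes): for a smooth function, the quotients ∆_h u converge to the derivative with an O(h) error in L², and in particular stay uniformly bounded in h — the property the W^{2,2} argument above exploits.

```python
import numpy as np

# Forward difference quotients Delta_h u = (u(x + h) - u(x))/h for
# u = sin on a 1D grid: they approximate u' = cos with an L2 error O(h).
x = np.linspace(0.0, 2 * np.pi, 4001)
du = np.cos(x)                             # exact derivative

def l2_error(h):
    dq = (np.sin(x + h) - np.sin(x)) / h   # Delta_h u (u evaluated exactly)
    return np.sqrt(np.mean((dq - du) ** 2))

hs = [0.1, 0.05, 0.025, 0.0125]
errs = [l2_error(h) for h in hs]
print(errs)   # roughly halves each time h is halved (first-order accuracy)
```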
In general, the regularity of the coefficients determines how often you can differentiate your equation.
In the boundary situation, this works for all tangential derivatives, i.e. DD_su ∈ L²(Ω), s = 1, …, n − 1 (after reorienting the axes so that ∂Ω is flat near x0). The only derivative missing is D_{nn}u. But from the original PDE we have

D_{nn}u = −Σ_{i=1}^{n−1} D_{ii}u ∈ L²(Ω).
10.6 Eigenfunction Expansions for Elliptic Operators
Let A : Rⁿ → Rⁿ (A an n × n matrix). A is symmetric ⟺ (Ax, y) = (x, Ay) ∀x, y ∈ Rⁿ ⟹ there exists an orthonormal basis of eigenvectors v_i ∈ Rⁿ \ {0} and eigenvalues λ_i ∈ R such that Av_i = λ_i v_i.

Unique expansion:

v = Σ_i α_i v_i,  α_i = (v, v_i),  |v|² = Σ_i α_i²,

and

Av = A(Σ_i α_i v_i) = Σ_i α_i λ_i v_i:

A is diagonal in the eigenbasis.
Application: solve the ODE y′ = Ay, i.e. y(t) ∈ Rⁿ with y(0) = y₀. Explicit representation: expanding y(t) = Σ_i α_i(t)v_i and y₀ = Σ_i α_i⁰ v_i,

Σ_i α_i′(t)v_i = Σ_i λ_i α_i(t)v_i

⟹ α_i′(t) = λ_i α_i(t), α_i(0) = α_i⁰, i = 1, …, n, so α_i(t) = α_i⁰ e^{λ_i t}.
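This diagonalization recipe can be checked numerically; the symmetric matrix below is my own arbitrary example (a sketch, not from the notes).

```python
import numpy as np

# Solve y' = A y, y(0) = y0, for symmetric A by expanding y0 in the
# orthonormal eigenbasis:  alpha_i(t) = alpha_i(0) e^{lambda_i t}.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # symmetric example matrix (hypothetical)
y0 = np.array([1.0, -1.0])

lam, V = np.linalg.eigh(A)        # columns of V form an orthonormal eigenbasis

def y(t):
    alpha0 = V.T @ y0             # alpha_i(0) = (y0, v_i)
    return V @ (np.exp(lam * t) * alpha0)

# verify the ODE y' = A y with a centered difference quotient at t = 0.3
t, h = 0.3, 1e-6
residual = np.max(np.abs((y(t + h) - y(t - h)) / (2 * h) - A @ y(t)))
print(residual)                   # essentially zero (O(h^2) plus round-off)
```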
Natural generalization: H is a Hilbert space, K : H → H compact and symmetric, i.e. (Kx, y) = (x, Ky) ∀x, y ∈ H.

Theorem 10.6.1 (Eigenvector expansion): H a Hilbert space, K symmetric, compact, and K ≠ 0. Then there exists an orthonormal system of eigenvectors e_i, i ∈ N ⊆ ℕ, and corresponding eigenvalues λ_i ∈ R such that

Kx = Σ_{i∈N} λ_i (x, e_i) e_i.

If N is infinite, then λ_k → 0 as k → ∞. Moreover,

H = N(K) ⊕ span{e_i : i ∈ N}.
Definition 10.6.2: The elliptic operator L is said to be symmetric if L = L*, i.e.

a_ij = a_ji, b_i = c_i for a.e. x ∈ Ω.

Theorem 10.6.3 (Eigenfunction expansion for elliptic operators): Suppose L is symmetric and strictly elliptic. Then there exists an orthonormal system of eigenfunctions u_i ∈ L²(Ω) and eigenvalues λ_i, with λ_i ≤ λ_{i+1} and λ_i → ∞, such that

Lu_i = λ_i u_i in Ω;  u_i = 0 on ∂Ω,

and L²(Ω) = span{u_i : i ∈ ℕ}.
Remark: Lu_i = λ_i u_i ∈ L²(Ω) ⟹ u_i ∈ H²(Ω) by regularity theory. If the coefficients are smooth, then Lu_i = λ_i u_i ∈ H²(Ω) ⟹ u_i ∈ H⁴(Ω), …, i.e. u_i ∈ C^∞(Ω) if the coefficients are smooth.
Proof: As before, L_σ u := Lu + σu is invertible for σ large enough, and L_σ^{-1} is compact as a mapping from L²(Ω) → L²(Ω). Moreover, L_σ^{-1}u = 0 ⟺ u = L_σ 0 = 0, i.e. N(L_σ^{-1}) = {0}.

By Theorem 9.13, there exist u_i ∈ L²(Ω) such that L²(Ω) = span{u_i : i ∈ ℕ} and

L_σ^{-1}u = Σ_{i∈ℕ} µ_i (u, u_i) u_i.

Choosing u = u_j,

L_σ^{-1}u_j = Σ_{i∈ℕ} µ_i (u_j, u_i) u_i = µ_j u_j  (since (u_j, u_i) = δ_ij),  µ_j ≠ 0

⟹ L_σ u_j = (1/µ_j) u_j ⟺ Lu_j + σu_j = (1/µ_j) u_j ⟺ Lu_j = (1/µ_j − σ) u_j =: λ_j u_j.

Furthermore,

0 ≤ L_σ(u_j, u_j) = (1/µ_j)(u_j, u_j) = 1/µ_j  ⟹ µ_j > 0

⟹ the µ_j have zero as their only possible accumulation point ⟹ λ_j → +∞ as j → ∞.
Proposition 10.6.4 (Weak lower semicontinuity of the norm): X a Banach space, x_j ⇀ x in X; then

||x|| ≤ lim inf_{j→∞} ||x_j||.

Proof: Take y ∈ X*; then

|⟨y, x_j⟩| ≤ ||y||_{X*} ||x_j||_X, and |⟨y, x_j⟩| → |⟨y, x⟩|,

⟹ |⟨y, x⟩| ≤ (lim inf_{j→∞} ||x_j||_X) ||y||_{X*}.

Hahn–Banach: for every x0 ∈ X, ∃y ∈ X* such that ⟨y, x0⟩ = ||x0||_X and ⟨y, z⟩ ≤ ||z||_X ∀z ∈ X (see Rudin 3.3 and its corollary). This obviously implies ||y||_{X*} = 1. So, we pick y* ∈ X* such that ⟨y*, x⟩ = ||x||_X. Thus

||x|| = ⟨y*, x⟩ ≤ (lim inf_{j→∞} ||x_j||_X)·||y*||_{X*} = lim inf_{j→∞} ||x_j||_X.
Theorem 10.6.5 (): L strictly elliptic, λ_i the eigenvalues. Then

λ₁ = min{L(u, u) : u ∈ H₀¹(Ω), ||u||_{L²(Ω)} = 1},
λ_{k+1} = min{L(u, u) : u ∈ H₀¹(Ω), ||u||_{L²(Ω)} = 1, (u, u_i) = 0, i = 1, …, k}.

Define µ := inf{L(u, u) : u ∈ H₀¹(Ω), ||u||_{L²(Ω)} = 1}.
Proof:

1. inf = min. Choose a minimizing sequence u_j ∈ H₀¹(Ω), ||u_j||_{L²(Ω)} = 1, such that L(u_j, u_j) → µ. We know from the proof of Fredholm's alternative that

L(u_j, u_j) ≥ (λ/2) ∫_Ω |Du_j|² dx − C ∫_Ω |u_j|² dx, with ∫_Ω |u_j|² dx = 1,

⟹ (λ/2) ∫_Ω |Du_j|² dx ≤ L(u_j, u_j) + C < ∞ ⟹ (u_j) is bounded in W^{1,2}₀(Ω).

W^{1,2}(Ω) is reflexive ⟹ there exists a subsequence u_{j_k} ⇀ u in W^{1,2}(Ω) (Banach–Alaoglu). By the compact Sobolev embedding, u_{j_k} → u strongly in L²(Ω). (More precisely: u_{j_k} bounded in W^{1,2}(Ω) ⟹ u_{j_k} has a subsequence converging strongly to some u* in L²(Ω) by the Sobolev embedding. Clearly, u* is also a weak limit of u_{j_k}; thus, by the uniqueness of weak limits, u* = u a.e.)

Du_{j_k} ⇀ Du weakly in L²(Ω), and ||u||_{L²(Ω)} = 1 from the strong convergence.
2. Show that L(u, u) ≤ µ (⟹ L(u, u) = µ, u a minimizer, inf = min). As before,

L_σ(u, u) ≥ γ ||u||²_{W^{1,2}(Ω)}

for σ sufficiently large, i.e. |||u|||² := L_σ(u, u) defines a scalar product equivalent to the standard H¹(Ω) scalar product, and ||| · ||| defines a norm.

By weak lower semicontinuity of the norm, with u_{j_k} ⇀ u in W^{1,2}(Ω):

L_σ(u, u) = |||u|||² ≤ lim inf_{k→∞} |||u_{j_k}|||² = lim inf_{k→∞} (L(u_{j_k}, u_{j_k}) + σ||u_{j_k}||²_{L²(Ω)}) = µ + σ

⟹ L(u, u) ≤ µ ⟹ u is the minimizer.
3. µ is an eigenvalue, Lu = µu. Consider the variation

u_t = (u + tv)/||u + tv||_{L²(Ω)}.
Then t ↦ L(u_t, u_t) has a minimum at t = 0.

L(u_t, u_t) = (1/||u + tv||²_{L²(Ω)}) (L(u, u) + tL(u, v) + tL(v, u) + t²L(v, v)), with tL(u, v) + tL(v, u) = 2tL(u, v) by symmetry,

d/dt|_{t=0} L(u_t, u_t) = 2L(u, v) − 2(∫_Ω uv dx)·L(u, u) = 0, where L(u, u) = µ.

Here we have used

1/∫_Ω(u + tv)² dx = [∫_Ω(u + tv)² dx]^{-1} = 1 at t = 0,
d/dt [∫_Ω(u + tv)² dx]^{-1} = −[∫_Ω(u + tv)² dx]^{-2} · 2 ∫_Ω(u + tv)v dx.

⟹ L(u, v) = µ(u, v) ∀v ∈ H₀¹(Ω) ⟺ Lu = µu, i.e. µ is an eigenvalue.
4. µ = λ₁. Suppose µ > λ₁. Then there exists u₁ ∈ H₀¹(Ω) with ||u₁||_{L²(Ω)} = 1 such that Lu₁ = λ₁u₁, so

L(u₁, u₁) = λ₁(u₁, u₁) = λ₁ < µ = inf_{||u||_{L²(Ω)}=1} L(u, u).

Thus, we have a contradiction.
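The variational characterization (and the sign/simplicity properties discussed next) can be observed on the discrete 1D Dirichlet Laplacian — a numerical sketch of my own, not part of the notes; the comparison value π² is the first Dirichlet eigenvalue of −d²/dx² on (0, 1).

```python
import numpy as np

# Discrete Dirichlet Laplacian L on (0,1) with N interior grid points:
# lambda_1 = min of the Rayleigh quotient (Lu,u)/(u,u), lambda_1 is simple,
# and the first eigenvector has a strict sign.
N = 200
h = 1.0 / (N + 1)
L = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

lam, V = np.linalg.eigh(L)        # eigenvalues returned in ascending order
v1 = V[:, 0]                      # discrete first eigenfunction

def rayleigh(u):
    return (u @ L @ u) / (u @ u)

# any competitor has a larger Rayleigh quotient than lambda_1
w = np.random.default_rng(0).standard_normal(N)
print(lam[0], rayleigh(w))        # lam[0] is close to pi^2 ~ 9.87
```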
Simple situation: a_ij smooth, b_i, c_i, d = 0.

Proposition 10.6.6 (): L strictly elliptic; then the smallest eigenvalue λ₁ (also called the principal eigenvalue) is simple, i.e. the dimension of the corresponding eigenspace is one.
Proof: Following the last theorem, we have the variational characterization of λ₁:

min L(u, u) = λ₁, and L(u, u) = λ₁ ⟺ Lu = λ₁u.

Key step: suppose that u solves

Lu = λ₁u in Ω;  u = 0 on ∂Ω.

Then either u > 0 or u < 0 in Ω.

Suppose that ||u||_{L²(Ω)} = 1 and define u⁺ := max(u, 0) and u⁻ := −min(u, 0), i.e. u = u⁺ − u⁻. Then u⁺, u⁻ ∈ W^{1,2}₀(Ω), and

Du⁺ = Du a.e. on {u > 0}, and 0 elsewhere;  Du⁻ = −Du a.e. on {u < 0}, and 0 elsewhere.
Thus, we see that L(u⁺, u⁻) = 0, since ∫_Ω u⁺u⁻ dx = 0, ∫_Ω Du⁺·Du⁻ dx = 0, etc.

⟹ λ₁ = L(u, u) = L(u⁺ − u⁻, u⁺ − u⁻) = L(u⁺, u⁺) − 2L(u⁺, u⁻) + L(u⁻, u⁻)
  = L(u⁺, u⁺) + L(u⁻, u⁻) ≥ λ₁||u⁺||²_{L²(Ω)} + λ₁||u⁻||²_{L²(Ω)} = λ₁||u||²_{L²(Ω)} = λ₁

⟹ we have equality throughout:

L(u⁺, u⁺) = λ₁||u⁺||²_{L²(Ω)},  L(u⁻, u⁻) = λ₁||u⁻||²_{L²(Ω)}

⟹ Lu± = λ₁u±, u± ∈ W^{1,2}₀(Ω), since every minimizer is a solution of the eigen-problem. The coefficients being smooth ⟹ u± smooth, i.e. u± ∈ C^∞(Ω). In particular, Lu⁺ = λ₁u⁺ ≥ 0 (λ₁ ≥ 0)

⟹ u⁺ is a supersolution of the elliptic equation.
=⇒ u+ is a supersolution of the elliptic equation.
The strong maximum principle says that either u+ ≡ 0 or u+ > 0 in Ω. Analogously,
either u− ≡ 0, or u− < 0 in Ω.
=⇒ either u = u+ or u = u−, i.e. either u > 0 or u < 0 in Ω.
Suppose now that u and u are solutions of Lu = λ1u in Ω. u, u ∈ W 1,20 (Ω) =⇒ u and
u have a sign, and we can choose γ such that∫
Ω
(u + γu) dx = 0
Since the original equation is linear, u+γu is also a solution =⇒ u+γu = 0 (otherwise u+γu
would have a sign) =⇒ u = −γu =⇒ all solutions are multiples of u =⇒ dim(Eig(λ1)) = 1.
Other argument: if dim(Eig(λ₁)) > 1, there exist u₁, u₂ in the eigenspace such that

∫_Ω u₁u₂ dx = 0,

which is a contradiction, since two functions of strict sign cannot be orthogonal in L²(Ω).
10.7 Applications to Parabolic Equations of 2nd Order
General assumption, Ω ⊂ Rn open and bounded, ΩT = Ω× (0, T ] solve the initial/boundary
value problem for u = u(x, t)
ut + Lu = f on ΩT
u = 0 on ∂Ω× [0, T ]
u = g on Ω× t = 0
L strictly elliptic differential operator, Lu = −Di(ai jDju)+biDiu+cu with ai j , bi , c ∈ L∞(Ω).
Definition 10.7.1: The operator ∂t + L is uniformly parabolic, if ai jξiξj ≥ λ|ξ|2 ∀ξ ∈ Rn,
∀(x, t) ∈ ΩT , with λ > 0.
Standard example: the heat equation, L = −∆, u_t − ∆u = f.

Suppose that f, g ∈ L²(Ω), and define

L(u, v; t) = ∫_Ω [a_ij D_j u D_i v + b_i D_i u·v + c uv] dx  ∀v ∈ H₀¹(Ω), t ∈ [0, T].

New point of view: parabolic PDE ↔ ODE in a Hilbert space.

u(x, t) ⟿ U : [0, T] → H₀¹(Ω), (U(t))(x) = u(x, t), x ∈ Ω, t ∈ (0, T];
analogously f(x, t) ⟿ F : [0, T] → L²(Ω), (F(t))(x) = f(x, t), x ∈ Ω, t ∈ (0, T].
Suppose we have a smooth solution of the PDE; multiply by v ∈ W^{1,2}₀(Ω) and integrate by parts in ∫_Ω · dx:

∫_Ω U′(t)·v dx + L(U, v; t) = ∫_Ω F(t)·v dx.

What is u_t? The PDE u_t + Lu = f says that

u_t = f − Lu = f + D_i(a_ij D_j u) − b_i D_i u − cu, with f, a_ij D_j u, b_i D_i u, cu all in L²(Ω),

⟹ u_t = g₀ + D_i g_i ∈ H^{-1}(Ω),

where g₀ = f − b_i D_i u − cu and g_i = a_ij D_j u.

This suggests that ∫_Ω U′(t)·v dx has to be interpreted as

⟨U′(t), v⟩ = duality pairing in (H^{-1}(Ω), H₀¹(Ω)).
Summary:

Lu = −D_i(a_ij D_j u) + b_i D_i u + cu;
equation: u_t + Lu = f in Ω × (0, T];  u = 0 on ∂Ω × [0, T];  u = g on Ω × {t = 0};
parabolic if L elliptic: Σ a_ij ξ_i ξ_j ≥ λ|ξ|², λ > 0;
u_t = f − Lu = f + D_i(a_ij D_j u) − b_i D_i u − cu ∈ H^{-1}(Ω);
(u_t, v) is the duality pairing between H^{-1} and H₀¹;
parabolic PDE ↔ ODE with Banach-space-valued functions, U(t)(x) = u(x, t) ∀x ∈ Ω, t ∈ [0, T];
L(u, v; t) = ∫_Ω (a_ij D_j u·D_i v + b_i D_i u·v + c uv) dx.
10.7.1 Outline of Evan’s Approach
Definition 10.7.2: u ∈ L2(0, T ;H10(Ω)) with u′ ∈ L2(0, T ;H−1) is a weak solution of the
IBVP if
i.) 〈U ′, v〉+ L(U, v ; t) = (F, v)L2(Ω) ∀v ∈ H10(Ω), a.e. t ∈ (0, T )
ii.) U(0) = g in Ω
Remark: u ∈ L2(0, T ;H10(Ω)), u′ ∈ L2(0, T ;H−1) then C0(0, T ;L2) and thus U(0) is
well defined.
General framework: calculus in abstract spaces.

Definition 10.7.3:

1) L^p(0, T; X) is the space of all measurable functions u : [0, T] → X with

||u||_{L^p(0,T;X)} = (∫₀ᵀ ||u(t)||^p_X dt)^{1/p} < ∞, 1 ≤ p < ∞,

and

||u||_{L^∞(0,T;X)} = ess sup_{0≤t≤T} ||u(t)||_X < ∞.

2) C⁰([0, T]; X) is the space of all continuous functions u : [0, T] → X with

||u||_{C⁰([0,T];X)} = max_{0≤t≤T} ||u(t)||_X < ∞.

3) For u ∈ L¹(0, T; X), v ∈ L¹(0, T; X) is the weak derivative of u if

∫₀ᵀ φ′(t)u(t) dt = −∫₀ᵀ φ(t)v(t) dt ∀φ ∈ C_c^∞(0, T).

4) The Sobolev space W^{1,p}(0, T; X) is the space of all u such that u, u′ ∈ L^p(0, T; X), with

||u||_{W^{1,p}(0,T;X)} = (∫₀ᵀ ||u(t)||^p_X + ||u′(t)||^p_X dt)^{1/p}

for 1 ≤ p < ∞, and

||u||_{W^{1,∞}(0,T;X)} = ess sup_{0≤t≤T} (||u(t)||_X + ||u′(t)||_X).
Theorem 10.7.4 (): Let u ∈ W^{1,p}(0, T; X), 1 ≤ p ≤ ∞. Then

i.) u ∈ C⁰([0, T]; X), i.e. there exists a continuous representative;
ii.) u(t) = u(s) + ∫ₛᵗ u′(τ) dτ for 0 ≤ s ≤ t ≤ T;
iii.) max_{0≤t≤T} ||u(t)||_X ≤ C·||u||_{W^{1,p}(0,T;X)}.

We will use X = H₀¹ and X = H^{-1}.
10.7.2 Evan’s Approach to Existence
Galerkin method: Solve your equation in a finite dimensional space, typically the span of the
first k eigenfunctions of the elliptic operator; solution uk .
Key: energy (a priori) estimates of the form
max0≤t≤T
||uk(t)||L2(Ω) + ||uk ||L2(0,T ;H10) + ||u′k ||L2(0,T ;H−1)
≤ C(||f ||L2(0,T ;L2) + ||g||L2(Ω)
)
with C independent of k
Crucial: (weak) compactness bounded sequences in L2(0, T ;H10) and L2(0, T ;H−1) con-
tain weakly convergent subsequences: ukl u, u′kl v
Then prove that v = u′, and that u is a weak solution of the PDE.
Galerkin method, more formal construction of solutions: consider y′ = Ay, y(0) = y₀, with A symmetric and positive definite ⟹ there exists a basis of eigenvectors u_i with eigenvalues λ_i. Express

y(t) = Σ_{i=1}^n α_i(t)u_i,  y₀ = Σ_{i=1}^n α_i⁰ u_i

⟹ Σ_{i=1}^n α_i′(t)u_i = Σ_{i=1}^n λ_i α_i(t)u_i;

testing against u_j,

α_i′(t) = λ_i α_i(t), α_i(0) = α_i⁰, so α_i(t) = α_i⁰ e^{λ_i t},

y(t) = Σ_{i=1}^n α_i⁰ e^{λ_i t} u_i.
The same idea works for A = ∆ = −L, the model for elliptic operators L. We want to solve

u_t − ∆u = 0 in Ω × (0, T);  u = 0 on ∂Ω × (0, T);  u(x, 0) = u₀ in Ω.

Let {u_i} be the orthonormal basis of eigenfunctions,

−∆u_i = λ_i u_i,

and try to find

u(x, t) = Σ_{i=1}^∞ α_i(t)u_i(x).

Then u_t = ∆u reads

Σ_{i=1}^∞ α_i′(t)u_i(x) = Σ_{i=1}^∞ (−λ_i)α_i(t)u_i(x);

testing against u_j and integrating,

α_i′(t) = −λ_i α_i(t), α_i(0) = α_i⁰, where Σ_{i=1}^∞ α_i⁰ u_i(x) = u₀(x).
Formal solution:

u(x, t) = Σ_{i=1}^∞ e^{−λ_i t} α_i⁰ u_i(x).

Justification via weak solutions: multiply by a test function, integrate by parts, and choose φ ∈ C_c^∞([0, T) × Ω):

−∫₀ᵀ∫_Ω u·φ_t dx dt − ∫_Ω u₀(x)φ(x, 0) dx = −∫₀ᵀ∫_Ω Du·Dφ dx dt = ∫₀ᵀ∫_Ω u·∆φ dx dt.

Expand the initial data

u₀(x) = Σ_{i=1}^∞ α_i⁰ u_i(x)

and define

u₀^{(k)}(x) = Σ_{i=1}^k α_i⁰ u_i(x).

Then u₀^{(k)} → u₀ in L².
Then

u^{(k)}(x, t) = Σ_{i=1}^k α_i⁰ e^{−λ_i t} u_i(x)

is a weak solution of the IBVP with u₀ replaced by u₀^{(k)}. (Galerkin method: approximate by u^{(k)} ∈ span{u₁, …, u_k}.)

Moreover, since e^{−λ_i t} ≤ 1,

||u^{(k)}(t) − u(t)||_{L²(Ω)} = ||Σ_{i=k+1}^∞ α_i⁰ e^{−λ_i t} u_i||_{L²(Ω)} ≤ (Σ_{i=k+1}^∞ (α_i⁰)²)^{1/2} → 0 as k → ∞

⟹ u^{(k)}(t) → u(t) uniformly in t as k → ∞.
−∫₀ᵀ∫_Ω u^{(k)}φ_t dx dt − ∫_Ω u₀^{(k)}φ(x, 0) dx = ∫₀ᵀ∫_Ω u^{(k)}∆φ dx dt

⟹ u is a weak solution (letting k → ∞).
The representation

u(x, t) = Σ_{i=1}^∞ α_i⁰ e^{−λ_i t} u_i(x)
gives good estimates.
||u(t)||_{L²(Ω)} = ||Σ_{i=1}^∞ α_i⁰ e^{−λ_i t} u_i||_{L²(Ω)} = (Σ_{i=1}^∞ (α_i⁰)² (e^{−λ_i t})²)^{1/2} ≤ e^{−λ₁t} (Σ_{i=1}^∞ (α_i⁰)²)^{1/2} = e^{−λ₁t} ||u₀||_{L²(Ω)}

⟹ ||u(t)||_{L²(Ω)} ≤ ||u₀||_{L²(Ω)} e^{−λ₁t}.

||∆u(t)||_{L²(Ω)} = ||Σ_{i=1}^∞ α_i⁰ e^{−λ_i t}(−λ_i)u_i||_{L²(Ω)} = (1/t) (Σ_{i=1}^∞ (α_i⁰ · λ_i t · e^{−λ_i t})²)^{1/2}
  ≤ (C/t) (Σ_{i=1}^∞ (α_i⁰)²)^{1/2} = (C/t) ||u₀||_{L²(Ω)},

using xe^{−x} ≤ C for x ≥ 0, applied with x = λ_i t.
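The decay estimate can be checked against the explicit eigenfunctions of −d²/dx² on (0, π) — a numerical sketch of my own; the facts λ_i = i² and Parseval's identity for the sine basis are standard and not part of the notes.

```python
import numpy as np

# Spectral heat flow on (0, pi) with Dirichlet conditions: lambda_i = i^2,
# and by Parseval  ||u(t)||_{L^2}^2 = sum_i (alpha_i^0 e^{-lambda_i t})^2.
# We check the bound  ||u(t)|| <= e^{-lambda_1 t} ||u_0||  with lambda_1 = 1.
modes = np.arange(1, 51)
alpha0 = 1.0 / modes                    # some square-summable initial data

def u_norm(t):
    return np.sqrt(np.sum((alpha0 * np.exp(-modes**2 * t)) ** 2))

u0_norm = u_norm(0.0)
for t in (0.1, 0.5, 1.0):
    print(t, u_norm(t), np.exp(-t) * u0_norm)   # decay bound holds
```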
Theorem 10.7.5 (): L symmetric and strictly elliptic. Then the IBVP

u_t + Lu = 0 in Ω × (0, T);  u = 0 on ∂Ω × (0, T);  u = u₀ on Ω × {t = 0}

has, for all u₀ ∈ L²(Ω), the solution

u(x, t) = Σ_{i=1}^∞ e^{−λ_i t} α_i⁰ u_i(x),

where {u_i} is a basis of orthonormal eigenfunctions of L in L²(Ω) with corresponding eigenvalues λ₁ ≤ λ₂ ≤ ⋯
10.8 Neumann Boundary Conditions
Neumann ↔ no flux
F (x, t) = heat flux = −κ(x)Du(x, t) Fourier’s law of cooling
Du · ν = 0 ⇐⇒ F (x, t) · ν = 0 no heat flux out of Ω
Simple model problem:
−div (a(x)Du) + c(x)u = f in Ω
(a(x)Du) · ν = 0 on ∂ΩGregory T. von Nessi
a and c are smooth, f ∈ L^2 (or f ∈ H^{−1}).
Weak formulation of the problem: multiply by v ∈ C^∞(Ω̄) ∩ H^1(Ω) and integrate over Ω,

∫_Ω [−div(a(x)Du) + c(x)u] v dx = ∫_Ω f v dx.    (10.25)

Integrating by parts on the left,

∫_Ω [−div(a(x)Du) + c(x)u] v dx = −∫_{∂Ω} ((a(x)Du) · ν) v dS (= 0 by the BC) + ∫_Ω [a(x)Du · Dv + c(x)uv] dx

=⇒ ∫_Ω [a(x)Du · Dv + c(x)uv] dx = ∫_Ω f v dx  ∀v ∈ H^1(Ω).    (10.26)
Definition 10.8.1: A function u ∈ H^1(Ω) is said to be a weak solution of the Neumann problem if (10.26) holds.
Remark: (a(x)Du) · ν = 0 on ∂Ω. Subtle issue: a(x)Du is a priori only an L^2 function, and it is not immediately clear how to define its trace on ∂Ω. However, −div(a(x)Du) = −c(x)u + f ∈ L^2. It turns out that a(x)Du ∈ L^2(div), which is enough to define the normal trace.
Proposition 10.8.2: Suppose that u is a weak solution and that u ∈ C^2(Ω) ∩ C^1(Ω̄). Then u is a solution of the Neumann problem.
Proof: Use (10.26) first for v ∈ C^∞_c(Ω). Reversing the integration by parts (10.25) gives

∫_Ω [−div(a(x)Du) + c(x)u − f] v dx = 0  ∀v ∈ C^∞_c(Ω)

=⇒ −div(a(x)Du) + cu = f in Ω, i.e. the PDE holds.
Now take arbitrary v ∈ C^∞(Ω̄):

∫_Ω [a(x)Du · Dv + c(x)uv] dx = ∫_Ω f v dx.

Integrating by parts on the left,

∫_Ω [a(x)Du · Dv + c(x)uv] dx = ∫_{∂Ω} (a(x)Du · ν) v dS + ∫_Ω [−div(a(x)Du) + c(x)u] v dx,

and the last integral equals ∫_Ω f v dx by the PDE. Hence

∫_{∂Ω} (a(x)Du · ν) v dS = 0  ∀v ∈ C^∞(Ω̄)

=⇒ a(x)Du · ν = 0 on ∂Ω.
Remark: The Neumann BC is a “natural” BC in the sense that it follows from the weak formulation. It is not enforced the way the Dirichlet condition u = 0 on ∂Ω is, namely by seeking u ∈ H^1_0(Ω).
Theorem 10.8.3: Suppose a, c ∈ C(Ω̄) and that there exist a_1, a_2, c_1, c_2 > 0 such that c_1 ≤ c(x) ≤ c_2 and a_1 ≤ a(x) ≤ a_2 (i.e., the equation is elliptic). Then for every f ∈ L^2(Ω) there exists a unique weak solution u of

−div(a(x)Du) + cu = f in Ω
(a(x)Du) · ν = 0 on ∂Ω,

and ||u||_{H^1(Ω)} ≤ C||f||_{L^2(Ω)}.
Remark: The same assertion holds with f a bounded linear functional on H^1(Ω), i.e. f ∈ H^{−1}.
Proof: Being a weak solution means

∫_Ω [a(x)Du · Dv + c(x)uv] dx = ∫_Ω f v dx  ∀v ∈ H^1(Ω),

with the linear functional F(v) = ∫_Ω f v dx and the bilinear form

B(u, v) = ∫_Ω [a(x)Du · Dv + c(x)uv] dx;

our Hilbert space is clearly H^1(Ω). The assertion of the theorem follows from Lax–Milgram if B is bounded and coercive. Boundedness:

B(u, v) ≤ a_2 ∫_Ω |Du| |Dv| dx + c_2 ∫_Ω |u| |v| dx
       ≤ a_2 ||Du||_{L^2(Ω)} ||Dv||_{L^2(Ω)} + c_2 ||u||_{L^2(Ω)} ||v||_{L^2(Ω)}
       ≤ C ||u||_{H^1(Ω)} ||v||_{H^1(Ω)}

=⇒ B is bounded.
Coercivity:

B(u, u) = ∫_Ω (a(x)|Du|^2 + c(x)|u|^2) dx ≥ min(a_1, c_1) ||u||^2_{H^1(Ω)},

i.e., B is coercive.
Remarks:
1) We need c_1 > 0 to get the L^2-norm in the estimate; there is no Poincaré inequality in H^1(Ω). Poincaré:

∫_Ω |u|^2 dx ≤ C_P ∫_Ω |Du|^2 dx,

and this fails automatically if the space contains non-zero constants.
2) This theorem does not allow us to solve

−∆u = f in Ω
Du · ν = 0 on ∂Ω.

Indeed, for any solution u,

∫_Ω f dx = ∫_Ω −∆u dx = −∫_{∂Ω} Du · ν dS = 0

by integration by parts, so we have the constraint ∫_Ω f dx = 0.
10.8.1 Fredholm’s Alternative and Weak Solutions of the Neumann
Problem
−∆u + u + λu = f in Ω
Du · ν = 0 on ∂Ω

(i.e., a(x) = 1, c(x) = 1; recall that −∆u + u = f has a unique solution with Neumann BCs).
Operator equation: apply (−∆ + I)^{−1} to

(−∆ + I + λI)u = f

⟺ [I + (−∆ + I)^{−1}λI]u = (−∆ + I)^{−1}f
⟺ (I − T)u = (−∆ + I)^{−1}f in H^1(Ω), where T = −(−∆ + I)^{−1}(λI).

Fredholm applies to compact perturbations of I, so we want to show that T = −(−∆ + I)^{−1}(λI) is compact:

(−∆ + I)^{−1} : H^{−1} → H^1 is bounded,
λI : H^1 → H^{−1} factors as λI_1I_0, where I_0 : H^1(Ω) ⊂⊂→ L^2(Ω) is compact (for ∂Ω smooth) and I_1 : L^2(Ω) ⊂→ H^{−1}(Ω) is bounded

=⇒ the embedding is compact =⇒ T is compact.
Fredholm alternative: either there is a unique solution for every right-hand side, or the homogeneous equation has a non-trivial solution:

(I − T)u = 0
⟺ [I + (−∆ + I)^{−1}λI]u = 0
⟺ [(−∆ + I) + λI]u = 0
⟺ [−∆ + (λ + 1)I]u = 0
The special case λ = −1 has non-trivial solutions, e.g. u ≡ constant.
Solvability criterion: We can solve (I − T)u = (−∆ + I)^{−1}f iff the right-hand side lies in N^⊥((I − T)^*) = N^⊥(I − T), since T = T^*. Non-trivial solutions for λ = −1 satisfy

−∆u = 0 in Ω (in the weak sense)
Du · ν = 0 on ∂Ω

⟺ ∫_Ω Du · Dv dx = 0  ∀v ∈ H^1(Ω).

Taking v = u:

∫_Ω |Du|^2 dx = 0

=⇒ u ≡ constant (if Ω is connected)
=⇒ the null-space of the adjoint operator consists of the constants
=⇒ being perpendicular to this means ∫_Ω f dx = 0.

There is uniqueness only up to constants.
Remark: Eigenfunctions of −∆ with Neumann BC, i.e. solutions of
−∆u = λu, Du · ν = 0
then λ = 0 is the smallest eigenvalue with a 1D eigenspace, the space of constant functions.
10.8.2 Fredholm Alternative Review

∆u = f in Ω
Du · ν = 0 on ∂Ω

Embeddings: H^1 ⊂→ L^2 (call it I_1) and L^2 ⊂→ H^{−1} (call it I_0); set I_2 = I_0I_1.
Let Lu = −∆u + u, and consider −∆u + u + λu = f. The associated bilinear form is

L(u, v) = ∫_Ω (Du · Dv + uv) dx,

and the adjoint satisfies ⟨L^*u, v⟩ = L^*(u, v) = L(v, u) = ⟨Lu, v⟩. Here L = L^* (for Lu = −D_i(a^{ij}D_ju), L^* has the coefficients a^{ji}).

Operator equation: u + L^{−1}(λI_2 u) = L^{−1}F in H^1.

As for Dirichlet conditions, it is convenient to write the equation in L^2: applying I_1,

I_1u + I_1L^{−1}(λI_2)u = I_1L^{−1}F    (write ū = I_1u, F̄ = I_1L^{−1}F)
⟺ ū + (I_1L^{−1}λI_0)ū = F̄,

where I_1L^{−1}λI_0 is compact since I_1 is compact. This last equation is the same as for the Dirichlet problem, but L^{−1} has Neumann conditions.
Application of the Fredholm Alternative:
i.) Either there is a solution for all right hand sides, or
ii.) there are non-trivial solutions of the homogeneous equation.
iii.) If ii.) holds, solutions exist iff the right-hand side is perpendicular to the kernel of the adjoint.

What happens in ii.)? Check: if ū solves ū + (I_1L^{−1}λI_0)ū = 0, then u = −L^{−1}(λI_0ū) ∈ H^1 satisfies ū = I_1u and solves u + L^{−1}(λI_2)u = 0 in H^1 ⟺ −∆u + u + λu = 0.
Now let T be our compact operator, defined by T = −I_1L^{−1}λI_0.

Solvability of the original problem:
(1) Check that T^* = −λI_1(L^*)^{−1}I_0 = −λI_1L^{−1}I_0 = T.
(2) F̄ = I_1L^{−1}F has to be orthogonal to N((I − T)^*) = N(I − T): if ū is a non-trivial solution of ū + (I_1L^{−1}λI_0)ū = 0, then u = −L^{−1}(λI_0ū) solves u + (L^{−1}λI_2)u = 0 ⟺ u = constant (in our special case λ = −1) ⟺ ū = constant.
(3) Solvability: solutions exist iff F̄ = I_1L^{−1}F ⊥ constants (again for the special case λ = −1):

⟺ (F̄, v)_{L^2} = 0 ∀v constant ⟺ · · · ⟺ ⟨F, v⟩ = 0 ∀v constant.

If F(v) = ∫_Ω f v dx (i.e., for λ = −1, we solve −∆u = f with f ∈ L^2), then

∫_Ω f v dx = 0 ∀v constant =⇒ ∫_Ω f dx = 0.
So we conclude that

∆u = f in Ω
Du · ν = 0 on ∂Ω

has a solution ⟺ ∫_Ω f dx = 0.
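The compatibility constraint is visible at the discrete level too. In the sketch below (illustrative only), the Neumann stiffness matrix for −u'' on (0, 1) is singular with kernel the constants, and the linear system has a solution exactly when f has mean zero.

```python
import numpy as np

# Sketch: solvability of the discrete Neumann problem -u'' = f, u'·nu = 0.
# The stiffness matrix K is singular (kernel = constants); K u = M f is
# solvable iff M f is orthogonal to constants, i.e. iff int f dx = 0.
N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
K = np.zeros((N + 1, N + 1))
M = np.zeros((N + 1, N + 1))
for e in range(N):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    M[idx] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])

f_ok = np.cos(2 * np.pi * x)                  # mean zero: compatible
f_bad = f_ok + 1.0                            # nonzero mean: incompatible

u, *_ = np.linalg.lstsq(K, M @ f_ok, rcond=None)
res_ok = np.linalg.norm(K @ u - M @ f_ok)     # ~ 0: a solution exists
v, *_ = np.linalg.lstsq(K, M @ f_bad, rcond=None)
res_bad = np.linalg.norm(K @ v - M @ f_bad)   # bounded away from 0
```

The least-squares residual vanishes for the compatible datum and not for the other, mirroring the Fredholm criterion; the computed solution is, as above, unique only up to constants.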
10.9 Semigroup Theory
See [Hen81, Paz83] for additional information on this subject.
Idea: PDE ↔ ODE in a Banach space. For ẏ = ay we know the solution y(t) = y_0e^{at}. Now consider ẏ = Ay, where A is an n × n matrix which is also symmetric:

y(t) = e^{At}y_0,  e^{At} = ∑_{n=0}^∞ (At)^n/n! = Q e^{tΛ} Q^T  if A = QΛQ^T, Λ diagonal.
Goal: Solve u_t = Au (= ∆u) in a similar way by writing

u(t) = S(t)u_0  (S(t) = e^{At}),

where S(t) : X → X is a bounded, linear operator over a suitable Banach space, with
• S(0)u = u, ∀u ∈ X
• S(t + s)u = S(t)S(s)u
• t ↦ S(t)u continuous ∀u ∈ X, t ∈ [0,∞)
Definition 10.9.1: i.) A family {S(t)}_{t≥0} of bounded linear maps S(t) : X → X, X a Banach space, is called a semigroup (meaning that S has a semigroup structure: there need be no inverses) if
(1) S(0)u = u, ∀u ∈ X
(2) S(t + s)u = S(t)S(s)u = S(s)S(t)u, ∀s, t > 0, u ∈ X
(3) t ↦ S(t)u is continuous as a mapping [0,∞) → X, ∀u ∈ X
ii.) {S(t)}_{t≥0} is called a contraction semigroup if ||S(t)||_X ≤ 1, ∀t ∈ [0,∞).
Remark: It becomes clear that S^{−1} may not exist if one considers the reverse heat equation: solutions with smooth initial data can become very bad in finite time.
Definition 10.9.2: Let {S(t)}_{t≥0} be a semigroup, and define

D(A) = { u ∈ X : lim_{t→0^+} (S(t)u − u)/t exists in X },

where

Au = lim_{t→0^+} (S(t)u − u)/t,  u ∈ D(A).

A : D(A) → X is called the infinitesimal generator of {S(t)}_{t≥0}; D(A) is called its domain.
General assumption: {S(t)}_{t≥0} is a contraction semigroup. In the matrix model,

e^{tA} = Q e^{tΛ} Q^T, Λ diagonal,  e^{tΛ} = diag(e^{tλ_1}, . . . , e^{tλ_n}) with all λ_i ≤ 0,

where we recall that

e^{tA} = ∑_{n=0}^∞ (t^n/n!) A^n,  (d/dt) e^{tA} = A e^{tA}.
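For symmetric matrices these objects can be computed exactly, which makes the semigroup axioms easy to check. A sketch (illustration only): build A = QΛQ^T with Λ ≤ 0, set S(t) = Q e^{tΛ} Q^T, and verify S(0) = I, the semigroup property, the contraction bound, and that (S(h)u − u)/h recovers Au.

```python
import numpy as np

# Sketch: contraction semigroup S(t) = e^{tA} for a symmetric matrix A
# with non-positive eigenvalues, via the diagonalization A = Q Lam Q^T.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = -(B @ B.T)                       # symmetric, eigenvalues <= 0

lam, Q = np.linalg.eigh(A)

def S(t):
    # S(t) = Q e^{t Lam} Q^T
    return Q @ np.diag(np.exp(t * lam)) @ Q.T
```

The generator is then recovered as the limit of difference quotients, exactly as in Definition 10.9.2.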
Theorem 10.9.3: Let u ∈ D(A). Then
i.) S(t)u ∈ D(A), t ≥ 0
ii.) A S(t)u = S(t)Au
iii.) t ↦ S(t)u is differentiable for t > 0
iv.) (d/dt) S(t)u = A S(t)u = S(t)Au, where the last equality is due to ii.).
Proof:
u ∈ D(A) ⟺ lim_{s→0^+} (S(s)u − u)/s exists.
S(t)u ∈ D(A) ⟺ lim_{s→0^+} (S(s)S(t)u − S(t)u)/s exists. Now

lim_{s→0^+} (S(s)S(t)u − S(t)u)/s = lim_{s→0^+} (S(t)S(s)u − S(t)u)/s  (by (2) in the def. of semigroup)
= S(t) · lim_{s→0^+} (S(s)u − u)/s
= S(t)Au  (by def. of A)

=⇒ the limit exists, i.e. S(t)u ∈ D(A), and thus A S(t)u = S(t)Au
=⇒ i.) and ii.) are proved.
For iii.) we must show that backward and forward difference quotients exist and agree. Take u ∈ D(A), h > 0, t > 0:

lim_{h→0^+} [ (S(t)u − S(t − h)u)/h − S(t)Au ]
= lim_{h→0^+} [ (S(t − h)S(h)u − S(t − h)u)/h − S(t)Au ]
= lim_{h→0^+} [ S(t − h) ( (S(h)u − u)/h ) − S(t)Au ]
= lim_{h→0^+} [ S(t − h) ( (S(h)u − u)/h − Au ) + (S(t − h)Au − S(t)Au) ].

In the first term ||S(t − h)||_X ≤ 1 and ||(S(h)u − u)/h − Au||_X → 0 since u ∈ D(A); the second term equals (S(t − h) − S(t))Au → 0 by continuity of t ↦ S(t)u. Hence the limit exists and

lim_{h→0^+} (S(t)u − S(t − h)u)/h = S(t)Au.

An analogous argument applies to the forward quotient; the limit is the same and

(d/dt) S(t)u = S(t)Au = A S(t)u.
Remark: t ↦ S(t)Au being continuous implies that (d/dt)S(t)u is continuous; so t ↦ S(t)u is continuously differentiable.
Theorem 10.9.4 (Properties of infinitesimal generators):
i.) For each u ∈ X,

lim_{t→0^+} (1/t) ∫_0^t S(s)u ds = u.

ii.) D(A) is dense in X.
iii.) A is a closed operator, i.e. if u_k ∈ D(A) with u_k → u and Au_k → v, then u ∈ D(A) and v = Au.
Proof:
i.): Take u ∈ X. By the continuity of S(·)u we have lim_{t→0^+} ||S(t)u − u||_X = lim_{t→0^+} ||S(t)u − S(0)u||_X = 0. Conclusion i.) follows since u = (1/t) ∫_0^t u ds, so

|| (1/t) ∫_0^t S(s)u ds − u || = || (1/t) ∫_0^t (S(s)u − u) ds || ≤ sup_{0≤s≤t} ||S(s)u − u||_X → 0.
ii.): Take u ∈ X and define

u_t = ∫_0^t S(s)u ds.

Then

u_t / t = (1/t) ∫_0^t S(s)u ds → S(0)u = u as t → 0^+,

so ii.) follows if u_t ∈ D(A). Compute, for r > 0:

(S(r)u_t − u_t)/r = (1/r) [ ∫_0^t S(r)S(s)u ds − ∫_0^t S(s)u ds ]
= (1/r) ∫_0^t S(r + s)u ds − (1/r) ∫_0^t S(s)u ds  (semigroup prop.)
= (1/r) ∫_r^{t+r} S(s)u ds − (1/r) ∫_0^t S(s)u ds
= (1/r) ∫_t^{t+r} S(s)u ds − (1/r) ∫_0^r S(s)u ds.

Taking the limit of both sides,

lim_{r→0^+} (S(r)u_t − u_t)/r = S(t)u − u  by continuity of s ↦ S(s)u

=⇒ u_t ∈ D(A), Au_t = S(t)u − u.
iii.): (proof that A is closed).
Take u_k ∈ D(A) with u_k → u and Au_k → v in X. We need to show that u ∈ D(A) and v = Au. So consider

S(t)u_k − u_k = ∫_0^t (d/dt′) S(t′)u_k dt′ = ∫_0^t S(s) Au_k ds.
Taking k → ∞ (using Au_k → v and ||S(s)|| ≤ 1), we have

S(t)u − u = ∫_0^t S(s)v ds

=⇒ lim_{t→0^+} (S(t)u − u)/t = lim_{t→0^+} (1/t) ∫_0^t S(s)v ds = v

=⇒ Au = v with u ∈ D(A).
Definition 10.9.5: i.) We say λ ∈ ρ(A), the resolvent set of A, if the operator λI − A : D(A) → X is one-to-one and onto.
ii.) If λ ∈ ρ(A), then the resolvent operator R_λ : X → X is defined by R_λ = (λI − A)^{−1}.
Remark: The closed graph theorem says that a closed linear operator A : X → Y defined on all of X is bounded. It turns out that R_λ is closed, and R_λ is therefore bounded.
Theorem 10.9.6 (Properties of the resolvent):
i.) If λ, µ ∈ ρ(A), then

R_λ − R_µ = (µ − λ)R_λR_µ  (resolvent identity),

which immediately implies R_λR_µ = R_µR_λ.
ii.) If λ > 0, then λ ∈ ρ(A) and

R_λu = ∫_0^∞ e^{−λt} S(t)u dt,

and hence

||R_λ||_X ≤ 1/λ  (from estimating the integral).
Proof:
i.):

R_λ = R_λ(µI − A)R_µ = R_λ((µ − λ)I + (λI − A))R_µ = (µ − λ)R_λR_µ + R_λ(λI − A)R_µ,

and R_λ(λI − A) = I, so

R_λ − R_µ = (µ − λ)R_λR_µ =⇒ R_λR_µ = R_µR_λ.

ii.): S(t) is a contraction semigroup, ||S(t)||_X ≤ 1. Define

R_λ = ∫_0^∞ e^{−λt} S(t) dt,
which clearly exists. Now fix h > 0, u ∈ X and compute

(S(h)R_λu − R_λu)/h = (1/h) ∫_0^∞ e^{−λt} [S(t + h)u − S(t)u] dt
= (1/h) ∫_h^∞ e^{−λ(t−h)} S(t)u dt − (1/h) ∫_0^∞ e^{−λt} S(t)u dt
= −(1/h) ∫_0^h e^{−λ(t−h)} S(t)u dt + (1/h) ∫_0^∞ (e^{−λ(t−h)} − e^{−λt}) S(t)u dt
= −e^{λh} (1/h) ∫_0^h e^{−λt} S(t)u dt + ((e^{λh} − 1)/h) ∫_0^∞ e^{−λt} S(t)u dt,

where the first term tends to −u and, in the second, (e^{λh} − 1)/h → λ while the integral is R_λu. Collecting terms we have

lim_{h→0^+} (S(h)R_λu − R_λu)/h = −u + λR_λu

=⇒ R_λu ∈ D(A), and

AR_λu = −u + λR_λu ⟺ u = (λI − A)R_λu  ∀u ∈ X.    (10.27)
If u ∈ D(A), then

AR_λu = A ∫_0^∞ e^{−λt} S(t)u dt = ∫_0^∞ e^{−λt} A S(t)u dt  (an exercise in Evans)
= ∫_0^∞ e^{−λt} S(t) Au dt = R_λ(Au).

Hence

R_λ(λI − A)u = λR_λu − R_λAu = λR_λu − AR_λu = (λI − A)R_λu = u  (by (10.27)).
Now we need to prove the following.
Claim: λI − A is one-to-one and onto.
• one-to-one: Assume (λI − A)u = (λI − A)v with u ≠ v. Then

(λI − A)(u − v) = 0 =⇒ R_λ(λI − A)(u − v) = 0.

But u, v ∈ D(A) =⇒ u − v ∈ D(A), so R_λ(λI − A)(u − v) = u − v ≠ 0, a contradiction.
• onto: (λI − A)R_λu = u, i.e. w = R_λu solves (λI − A)w = u; hence (λI − A) : D(A) → X is onto.
So for λ > 0 the operator λI − A is one-to-one and onto, hence λ ∈ ρ(A), and the operator R_λ defined by the integral is exactly (λI − A)^{−1}.
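The Laplace-transform representation of the resolvent in Theorem 10.9.6 can be tested numerically in the matrix case. A sketch (illustration only): approximate ∫_0^∞ e^{−λt}S(t) dt by a trapezoid rule on [0, 40] and compare with (λI − A)^{−1}, along with the resolvent identity and the bound ||R_λ|| ≤ 1/λ.

```python
import numpy as np

# Sketch: R_lam = (lam I - A)^{-1} = int_0^inf e^{-lam t} S(t) dt for a
# symmetric A with spectrum <= 0, checked by a trapezoid rule on [0, T]
# (the truncation error is of order e^{-lam T}).
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = -(B @ B.T)
I = np.eye(5)
lam0, Q = np.linalg.eigh(A)

def S(t):
    return Q @ np.diag(np.exp(t * lam0)) @ Q.T

def R(lam):
    return np.linalg.inv(lam * I - A)

lam = 1.0
ts = np.linspace(0.0, 40.0, 40001)
dt = ts[1] - ts[0]
vals = np.array([np.exp(-lam * t) * S(t) for t in ts])
R_quad = (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * dt  # trapezoid
```

The quadrature reproduces the resolvent to a few digits, and since the spectrum of A lies in (−∞, 0], the operator-norm bound 1/λ holds as well.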
Theorem 10.9.7 (Hille, Yoshida): If A is closed and densely defined, then A is the infinitesimal generator of a contraction semigroup ⟺ (0,∞) ⊂ ρ(A) and ||R_λ||_X ≤ 1/λ for all λ > 0.
Proof:
=⇒) This is immediate from the last part of Theorem 10.9.6.
⟸=) We must build a contraction semigroup with A as its generator. For this fix λ > 0 and define

A_λ := −λI + λ^2R_λ = λAR_λ.    (10.28)

The operator A_λ is a kind of regularized approximation to A.
Step 1. Claim:

A_λu → Au as λ → ∞  (u ∈ D(A)).    (10.29)

Indeed, since λR_λu − u = AR_λu = R_λAu,

||λR_λu − u|| ≤ ||R_λ|| · ||Au|| ≤ (1/λ)||Au|| → 0.

Thus λR_λu → u as λ → ∞ if u ∈ D(A). But since ||λR_λ|| ≤ 1 and D(A) is dense, we deduce as well that

λR_λu → u as λ → ∞, ∀u ∈ X.    (10.30)

Now if u ∈ D(A), then A_λu = λAR_λu = λR_λAu → Au by (10.30), which proves (10.29).
Step 2. Next, define

S_λ(t) := e^{tA_λ} = e^{−λt} e^{λ^2tR_λ} = e^{−λt} ∑_{k=0}^∞ ((λ^2t)^k/k!) R_λ^k.

Observe that since ||R_λ|| ≤ λ^{−1},

||S_λ(t)|| ≤ e^{−λt} ∑_{k=0}^∞ (λ^{2k}t^k/k!) ||R_λ||^k ≤ e^{−λt} ∑_{k=0}^∞ (λ^k t^k/k!) = 1.

Consequently {S_λ(t)}_{t≥0} is a contraction semigroup, and it is easy to check that its generator is A_λ, with D(A_λ) = X.
Step 3. Let λ, µ > 0. Since R_λR_µ = R_µR_λ, we see A_λA_µ = A_µA_λ, and so A_µS_λ(t) = S_λ(t)A_µ for each t > 0. Thus if u ∈ D(A), we can compute

S_λ(t)u − S_µ(t)u = ∫_0^t (d/ds)[S_µ(t − s)S_λ(s)u] ds = ∫_0^t S_µ(t − s)S_λ(s)(A_λu − A_µu) ds,
334Chapter 10: Linear Elliptic PDE
because ddtSλ(t)u = AλSλ(t)u = Sλ(t)Aλu. Consequently, ||Sλ(t)u − Sµ(t)u|| ≤
t||Aλu − Aµu|| → 0 as λ, µ→∞. Hence
S(t)u := limλ→∞
Sλ(t)u exists for each t ≥ 0, u ∈ D(A). (10.31)
As ||Sλ(t)|| ≤ 1, the limit (10.31) in fact exists for all u ∈ X, uniformly for t in com-
pact subsets of [0,∞). It is now straightforward to verify S(t)t≥0 is a contraction
semigroup on X.
Step 4. It remains to show that A is the generator of {S(t)}_{t≥0}. Write B to denote this generator. Now

S_λ(t)u − u = ∫_0^t S_λ(s)A_λu ds.    (10.32)

In addition,

||S_λ(s)A_λu − S(s)Au|| ≤ ||S_λ(s)|| · ||A_λu − Au|| + ||(S_λ(s) − S(s))Au|| → 0

as λ → ∞, if u ∈ D(A). Passing therefore to limits in (10.32), we deduce

S(t)u − u = ∫_0^t S(s)Au ds

if u ∈ D(A). Thus D(A) ⊆ D(B) and

Bu = lim_{t→0^+} (S(t)u − u)/t = Au  (u ∈ D(A)).

Now if λ > 0, then λ ∈ ρ(A) ∩ ρ(B). Also (λI − B)(D(A)) = (λI − A)(D(A)) = X, according to the assumptions of the theorem. Hence (λI − B)|_{D(A)} is one-to-one and onto; whence D(A) = D(B). Therefore A = B, and so A is indeed the generator of {S(t)}_{t≥0}.
Recall: In a contraction semigroup we have ||S(t)||_X ≤ 1. Take A to be a symmetric n × n matrix, A = QΛQ^T:

||S(t)||_X = ||e^{At}||_X = ||Qe^{Λt}Q^T||_X ≤ 1.

Since

e^{Λt} = diag(e^{λ_1t}, . . . , e^{λ_nt}),

we need all λ_i ≤ 0. In general, we can only expect that

||S(t)||_X ≤ Ce^{λt}, λ > 0.
Remark: {S(t)}_{t≥0} is called an ω-contraction semigroup if

||S(t)||_X ≤ e^{ωt}, t ≥ 0.

Variant of the Hille-Yoshida Theorem: if A is closed and densely defined, then A generates an ω-contraction semigroup ⟺ (ω,∞) ⊂ ρ(A) and ||R_λ||_X ≤ 1/(λ − ω) for λ > ω.
10.10 Applications to 2nd Order Parabolic PDE
u_t + Lu = 0 in Ω_T
u = 0 on ∂Ω × [0, T]
u = g on Ω × {t = 0},

with L strongly elliptic, in divergence form, with smooth coefficients independent of t.
Idea: Consider this as an ODE and get u(t) = S(t)g, where S(t) is the semigroup generated by −L.
Choose: X = L^2(Ω), A = −L, D(A) = H^2(Ω) ∩ H^1_0(Ω).
A is unbounded, but we have the (Gårding-type) estimate

β||u||^2_{H^1_0(Ω)} ≤ L(u, u) + γ||u||^2_{L^2(Ω)}, γ ≥ 0,

where L(u, v) denotes the bilinear form of L.
Theorem 10.10.1 (): A generates a γ-contraction semigroup on X.
Proof: Check the assumptions of the Hille-Yoshida Theorem:
• D(A) dense in X: This is clearly true since we can approximate f ∈ L2(Ω) even with
fk ∈ C∞c (Ω).
• A is a closed operator: This means that if we have u_k ∈ D(A) with u_k → u and Au_k → f, then u ∈ D(A) and Au = f.
To show this, set Au_k = f_k, so that Au_l − Au_k = −Lu_l + Lu_k = f_l − f_k.
Elliptic regularity (Au_k = f_k is an elliptic equation):

||u_k − u_l||_{H^2(Ω)} ≤ C[ ||f_k − f_l||_{L^2(Ω)} + ||u_k − u_l||_{L^2(Ω)} ]

(HW: show this for −∆u + u = f; L(u, u) = ∫_Ω |Du|^2 + |u|^2 dx = ||u||^2_{H^1(Ω)}.)

||u_k − u_l||_{H^2(Ω)} ≤ C( ||Au_k − Au_l||_{L^2(Ω)} + ||u_k − u_l||_{L^2(Ω)} ) → 0

=⇒ u_k is Cauchy in H^2(Ω)
=⇒ u ∈ H^2(Ω) ∩ H^1_0(Ω), i.e. u ∈ D(A)
=⇒ since u_k → u in H^2(Ω), we have Au_k → Au. We also know that Au_k → f (assumed), thus Au = f in L^2(Ω).
• Now we need to show (γ,∞) ⊂ ρ(A) and ||R_λ|| ≤ 1/(λ − γ) for λ > γ.
Use Fredholm's Alternative: the problem

Lu + λu = f in Ω
u = 0 on ∂Ω
has a unique solution for all f ∈ L^2(Ω) (by Lax–Milgram, for λ > γ), and the solution satisfies u ∈ H^2(Ω) ∩ H^1_0(Ω)

=⇒ −Au + λu = f ⟺ (λI − A)u = f ∈ L^2(Ω); remembering u ∈ H^2(Ω) ∩ H^1_0(Ω), we see that in particular (λI − A)^{−1} exists and

(λI − A)^{−1} : L^2(Ω) = X → D(A)

=⇒ λ ∈ ρ(A) and R_λ = (λI − A)^{−1} exists.
What remains: the resolvent estimate. Consider the weak form

L(u, v) + λ(u, v)_{L^2(Ω)} = (f, v)_{L^2(Ω)}.

Take u = v: L(u, u) + λ||u||^2_{L^2(Ω)} = (f, u)_{L^2(Ω)} ≤ ||f||_{L^2(Ω)}||u||_{L^2(Ω)}.
Remembering the estimate β||u||^2_{H^1_0(Ω)} − γ||u||^2_{L^2(Ω)} ≤ L(u, u), we see that

β||u||^2_{H^1_0(Ω)} + (λ − γ)||u||^2_{L^2(Ω)} ≤ ||f||_{L^2(Ω)}||u||_{L^2(Ω)}
=⇒ (λ − γ)||u||^2_{L^2(Ω)} ≤ ||f||_{L^2(Ω)}||u||_{L^2(Ω)}
=⇒ (λ − γ)||u||_{L^2(Ω)} ≤ ||f||_{L^2(Ω)}
⟺ ||R_λf||_{L^2(Ω)} / ||f||_{L^2(Ω)} ≤ 1/(λ − γ) =⇒ ||R_λ|| ≤ 1/(λ − γ).

This proves the resolvent estimate.
Now by the Hille-Yoshida Theorem, we have that A defines a γ-contraction semigroup.
Definition 10.10.2: If a function u ∈ C^1([0, T]; X) ∩ C^0([0, T]; D(A)) solves u̇ = Au + f(t), u(0) = u_0, then u is called a classical solution of the equation.
Theorem 10.10.3: Suppose that u is a classical solution. Then u is represented by

u(t) = S(t)u_0 + ∫_0^t S(t − s)f(s) ds.    (10.33)

(This is the formula given by the variation-of-constants method: S(t)u_0 corresponds to the homogeneous equation with initial data u_0, and ∫_0^t S(t − s)f(s) ds to the inhomogeneous equation with zero initial data.)
Proof: Differentiate.
Theorem 10.10.4: Suppose u_0 ∈ D(A), f ∈ C^0([0, T]; X), and that in addition f ∈ W^{1,1}([0, T]; X) or f ∈ L^2([0, T]; D(A)). Then (10.33) gives a classical solution of the abstract ODE u̇ = Au + f, u(0) = u_0.
Remark: For the parabolic equation

u_t + Lu = f(t) in Ω × [0, T]
u = 0 on ∂Ω × [0, T]
u = g in Ω × {t = 0},

in particular, if f ∈ L^2(Ω) is independent of t and u_0 ∈ H^2(Ω) ∩ H^1_0(Ω) = D(A), then there exists a classical solution of the parabolic equation.
Proofs of the last two theorems can be found in [RR04].
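For a matrix generator and constant f, the formula (10.33) can be evaluated in closed form and tested against the ODE. A sketch (illustration only): with S(t) = e^{tA}, the Duhamel integral ∫_0^t S(t − s)f ds equals A^{−1}(S(t) − I)f, and the resulting u satisfies u̇ = Au + f.

```python
import numpy as np

# Sketch: variation of constants u(t) = S(t) u0 + int_0^t S(t-s) f ds for
# the abstract ODE u' = A u + f with constant f; for constant f the
# integral is A^{-1} (S(t) - I) f.  A is symmetric negative definite.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = -(B @ B.T) - np.eye(4)           # spectrum <= -1
lam, Q = np.linalg.eigh(A)
Ainv = np.linalg.inv(A)

def S(t):
    return Q @ np.diag(np.exp(t * lam)) @ Q.T

u0 = rng.standard_normal(4)
f = rng.standard_normal(4)

def u(t):
    # S(t) u0 + A^{-1} (S(t) - I) f
    return S(t) @ u0 + Ainv @ (S(t) - np.eye(4)) @ f
```

A difference quotient of u reproduces Au + f, and u(t) relaxes to the steady state −A^{−1}f as t → ∞, as expected for a contraction-type semigroup.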
10.11 Exercises
9.1: [Qualifying exam 08/00] Let Ω ⊂ R^n be open and suppose that the operator L, formally defined by

Lu = −∑_{i,j=1}^n a^{ij} u_{x_ix_j},

has smooth coefficients a^{ij} : Ω → R. Suppose that L is uniformly elliptic, i.e., for some c > 0,

∑_{i,j=1}^n a^{ij} ξ_iξ_j ≥ c|ξ|^2  ∀ξ ∈ R^n,

uniformly in Ω. Show that if λ > 0 is sufficiently large, then whenever Lu = 0, we have

L(|Du|^2 + λ|u|^2) ≤ 0.
9.2: [Qualifying exam 01/02] Let B be the unit ball in R^2. Prove that there is a constant C such that

||u||_{L^2(B)} ≤ C( ||Du||_{L^2(B)} + ||u||_{L^2(∂B)} )  ∀u ∈ C^1(B̄).
9.3: [Qualifying exam 01/02] Let Ω be the unit ball in R2.
a) Find the weak formulation of the boundary value problem
−∆u = f in Ω,
u ± ∂nu = 0 on ∂Ω.
b) Decide for which choice of the sign this problem has a unique weak solution u ∈ H1(Ω)
for all f ∈ L^2(Ω). For this purpose you may use without proof the estimate

||u||_{L^2(Ω)} ≤ γ( ||Du||_{L^2(Ω)} + ||u||_{L^2(∂Ω)} )  ∀u ∈ H^1(Ω).
c) For the sign that does not allow a unique weak solution, prove non-uniqueness by
finding two smooth solutions of the boundary value problem with f = 0.
9.4: [John, Chapter 6, Problem 1] Let n = 3 and Ω = B(0, π) be the ball centered at zero
with radius π. Let f ∈ L2(Ω). Show that a solution u of the boundary value problem
∆u + u = f in Ω
u = 0 on ∂Ω,
can only exist if

∫_Ω f(x) (sin|x| / |x|) dx = 0.
9.5: Let f ∈ L2(Rn) and let u ∈ W 1,2(Rn) be a weak solution of
−∆u + u = f in Rn,
that is,

∫_{R^n} (Du · Dφ + uφ) dx = ∫_{R^n} f φ dx  ∀φ ∈ W^{1,2}(R^n).
Show that u ∈ W 2,2(Rn).
Hint: It suffices to prove that Dku ∈ W 1,2(Rn). In order to show this, check first that
uh = u(·+ hek) is a weak solution of the equation
−∆uh + uh = fh in Rn,
with fh = f (·+hek). Then derive an equation for (uh−u)/h and use uh−u as a test function.
9.6: [Qualifying exam 08/00] Let Ω be the unit ball in R^2. Suppose that u, v : Ω → R are smooth and satisfy
∆u = ∆v in Ω,
u = 0 on ∂Ω.
a) Prove that there is a constant C independent of u and v such that
||u||H1(Ω) ≤ C||v ||H1(Ω).
b) Show there is no constant C independent of u and v such that
||u||L2(Ω) ≤ C||v ||L2(Ω).
Hint: Consider sequences satisfying un − vn = 1.
9.7: [Qualifying exam 08/02] Let Ω be a bounded and open set in Rn with smooth boundary
Γ. Let q(x), f(x) ∈ L^2(Ω) and suppose that

∫_Ω q(x) dx = 0.

Let u(x, t) be the solution of the initial-boundary value problem

u_t − ∆u = q, x ∈ Ω, t > 0,
u(x, 0) = f(x), x ∈ Ω,
Du · ν = 0 on Γ.

Show that ||u(·, t) − v||_{W^{1,2}(Ω)} → 0 exponentially as t → ∞, where v is the solution of the boundary value problem

−∆v = q, x ∈ Ω,
Dv · ν = 0 on Γ,

that satisfies

∫_Ω v(x) dx = ∫_Ω f(x) dx.
9.8: [Qualifying exam 01/97] Let Ω ⊂ Rm be a bounded domain with smooth boundary, and
let u(x, t) be the solution of the initial-boundary value problem
u_tt − c^2∆u = q(x, t) for x ∈ Ω, t > 0,
u = 0 for x ∈ ∂Ω, t > 0,
u(x, 0) = u_t(x, 0) = 0 for x ∈ Ω.
Let {φ_n(x)}_{n=1}^∞ be a complete orthonormal set of eigenfunctions for the Laplace operator ∆ in Ω with Dirichlet boundary conditions, so −∆φ_n = λ_nφ_n. Set ω_n = c√λ_n. Suppose

q(x, t) = ∑_{n=1}^∞ q_n(t)φ_n(x),

where each q_n(t) is continuous, ∑ q_n^2(t) < ∞ and |q_n(t)| ≤ Ce^{−nt}. Show that there are sequences {a_n} and {b_n} such that ∑(a_n^2 + b_n^2) < ∞ and if

v(x, t) = ∑_{n=1}^∞ [a_n cos(ω_nt) + b_n sin(ω_nt)]φ_n(x),

then

||u(·, t) − v(·, t)||_{L^2(Ω)} → 0

at an exponential rate as t → ∞.
9.9: [Qualifying exam 08/99] Let Ω ⊂ R^n be a bounded open set and a^{ij} ∈ C(Ω̄) for i, j = 1, . . . , n, with ∑_{i,j} a^{ij}(x)ξ_iξ_j ≥ |ξ|^2 for all ξ ∈ R^n, x ∈ Ω. Suppose u ∈ H^1(Ω) ∩ C(Ω̄) is a weak solution of

−∑_{i,j} ∂_j(a^{ij}∂_iu) = 0 in Ω.

Set w = u^2. Show that w ∈ H^1(Ω), and show that w is a weak subsolution, i.e., prove that

B(w, v) := ∫_Ω ∑_{i,j} a^{ij} ∂_jw ∂_iv dx ≤ 0

for all v ∈ H^1_0(Ω) such that v ≥ 0.
9.10: [Qualifying exam 01/00] Let Ω = B(0, 1) be the unit ball in Rn. Using iteration
and the maximum principle (and other results with proper justification), prove that there
exists some solution to the boundary value problem
−∆u = arctan(u + 1) in Ω,
u = 0 on ∂Ω.
9.11: [Qualifying exam 01/97] Suppose f : Rn → R is continuous, nonnegative and has
compact support. Consider the initial value problem
ut − ∆u = u2 for x ∈ Rn, 0 < t < T,
u(x, 0) = f (x) for x ∈ Rn.
Using Duhamel’s principle, reformulate the initial value problem as a fixed point problem for
an integral equation. Prove that if T > 0 is sufficiently small, the integral equation has a
nonnegative solution in C(Rn × [0, T ]).
9.12: Let X = L^2(R) and define for t > 0 the operator S(t) : X → X by
(S(t)u)(x) = u(x + t).
Show that S(t) is a semigroup. Find D(A) and A, the infinitesimal generator of the semi-
group.
Hint: You may use without proof that for f ∈ Lp(R)
||f (·+ h)− f (·)||Lp(R) → 0 as |h| → 0
for p ∈ [1,∞).
11 Distributions and Fourier Transforms
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
11.1 Distributions 348
11.2 Products and Convolutions of Distributions; Fundamental Solutions 354
11.3 Fourier Transform of S(Rn) 357
11.4 Definition of Fourier Transform on L2(Rn) 361
11.5 Applications of FTs: Solutions of PDEs 364
11.6 Fourier Transform of Tempered Distributions 365
11.7 Exercises 366
11.1 Distributions
[RR04]
11.1.1 Distributions on C∞ Functions
Definition 11.1.1: Suppose Ω is a non-empty open set in R^n. f is a test function if f ∈ C^∞_c(Ω), i.e., f is infinitely many times differentiable and supp(f) ⊂ K ⊂⊂ Ω, K compact. The set of all test functions f is denoted by D(Ω).
Definition 11.1.2: Let {φ_n}_{n∈N}, φ ∈ D(Ω). Then φ_n → φ in D(Ω) if
i.) there exists a compact set K ⊂⊂ Ω such that supp(φ_n) ⊆ K ∀n and supp(φ) ⊂ K;
ii.) φ_n and arbitrary derivatives of φ_n converge uniformly to φ and the corresponding derivatives of φ.
Definition 11.1.3: A distribution or generalized function is a linear mapping φ 7→ (f , φ)
from D(Ω) into R which is continuous in the following sense: If φn → φ in D(Ω) then
(f , φn)→ (f , φ). The set of all distributions is denoted D′(Ω).
Example: If f ∈ C(Ω), then f gives a distribution T_f defined by

(T_f, φ) = ∫_Ω f(x)φ(x) dx.

Not all distributions can be identified with functions. If Ω = B(0, 1), we can define δ_0 by (δ_0, φ) = φ(0).
Lemma 11.1.4: Let f ∈ D′(Ω) and K ⊂⊂ Ω. Then there exist n_K ∈ N and a constant C_K with the following property:

|(f, φ)| ≤ C_K ∑_{|α|≤n_K} max_{x∈K} |D^αφ(x)|

for all test functions φ with supp(φ) ⊆ K.
Remarks:
1) The examples above satisfy this with n = 0.
2) If n_K can be chosen independent of K, then the smallest such n is called the order of the distribution.
3) There are distributions with n = ∞.

Proof: Suppose the contrary. Then there exist a compact set K ⊂⊂ Ω and a sequence of functions ψ_n with supp(ψ_n) ⊆ K and

|(f, ψ_n)| ≥ n ∑_{|α|≤n} max_{x∈K} |D^αψ_n(x)| > 0.

Define φ_n = ψ_n/(f, ψ_n). Then φ_n → 0 in D(Ω):
• supp(φ_n) ⊆ K ⊂⊂ Ω.
• For |α| = k and n ≥ k,

|D^αφ_n(x)| = |D^αψ_n(x)| / |(f, ψ_n)| ≤ |D^αψ_n(x)| / ( n ∑_{|β|≤n} max_{y∈K} |D^βψ_n(y)| ) ≤ 1/n

=⇒ D^αφ_n → 0 uniformly as n → ∞.
However,

(f, φ_n) = (f, ψ_n)/(f, ψ_n) = 1 ↛ 0 = (f, 0),

a contradiction.
Example: We can define a distribution f by

(f, φ) = −φ′(0),

i.e. f = (δ_0)′.
Since |(f, φ)| ≤ sup_{x∈K} |φ′(x)|, f is a distribution of order one.
Localization: G ⊂ Ω, G open, then D(G) ⊆ D(Ω). f ∈ D′(Ω) defines a distribution in
D′(G) simply by restriction. f ∈ D′(Ω) vanishes on the open set G if (f , φ) = 0 for all
φ ∈ D(G). With this in mind, we say that f = g on G if f − g vanishes on G. If f ∈ D′(Ω),
then one can show that there exists a largest open set G ⊂ Ω, such that f vanishes on G
(this can be the empty set). The complement of this set in Ω is called the support of f .
Example: Ω = B(0, 1), f = δ_0, supp(δ_0) = {0}.
Definition 11.1.5: A sequence fn ∈ D′(Ω) converges to f ∈ D′(Ω) if (fn, φ)→ (f , φ) for all
φ ∈ D(Ω).
Example: Ω = R,

f_n(x) = { n, 0 ≤ x ≤ 1/n;  0, otherwise }

(f_n, φ) = ∫_R f_n(x)φ(x) dx = n ∫_0^{1/n} φ(x) dx → φ(0) = (δ_0, φ) as n → ∞

(the middle expression is the mean value of φ over [0, 1/n])

=⇒ f_n → δ_0 in D′(Ω).
Analogously: standard mollifiers. Take φ ≥ 0 with supp(φ) ⊆ [−1, 1] and ∫_R φ(x) dx = 1. [Figure: a smooth bump φ supported on [−1, 1].] Setting φ_ε(x) = ε^{−1}φ(ε^{−1}x), we get φ_ε → δ_0 in D′(Ω) as ε → 0.
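Both of these convergence statements can be seen numerically. A sketch (illustration only) pairs the standard bump mollifier φ_ε against a smooth test function by quadrature; the pairings approach ψ(0) as ε → 0.

```python
import numpy as np

# Sketch: (phi_eps, psi) -> psi(0) for the standard mollifier
# phi_eps(x) = eps^{-1} phi(x / eps), evaluated by a Riemann sum.
def phi(x):
    # the standard bump exp(-1/(1 - x^2)) on (-1, 1), normalized below
    inside = np.abs(x) < 1.0
    return np.where(inside, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-12)), 0.0)

y = np.linspace(-1.0, 1.0, 100001)
Z = phi(y).sum() * (y[1] - y[0])          # normalizing constant: int phi = Z

def pairing(eps, psi):
    # (phi_eps, psi) = int phi(x/eps)/(Z eps) * psi(x) dx over [-eps, eps]
    t = np.linspace(-eps, eps, 100001)
    return (phi(t / eps) / (Z * eps) * psi(t)).sum() * (t[1] - t[0])

vals = [pairing(eps, np.cos) for eps in (0.5, 0.1, 0.01)]  # -> psi(0) = 1
```

The error behaves like ε², consistent with a Taylor expansion of ψ around 0.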
Theorem 11.1.6: Let f_n ∈ D′(Ω) be such that (f_n, φ) converges for every test function φ. Then there exists f ∈ D′(Ω) such that f_n → f.
Proof: (See Renardy and Rogers) Define f by
(f , φ) = limn→∞
(fn, φ).
Then f is clearly linear, and it remains to check the required continuity property of f .
Definition 11.1.7: • S(R^n) is the space of all C^∞(R^n) functions φ such that

|x|^k |D^αφ(x)|

is bounded for every k ∈ N and every multi-index α.
• We say that φ_n → φ in S(R^n) if the derivatives of φ_n of all orders converge uniformly to the corresponding derivatives of φ, and if the constants C_{k,α} in the estimate

|x|^k |D^αφ_n(x)| ≤ C_{k,α}

are independent of n.
• S(R^n) is called the space of rapidly decreasing functions, or the Schwartz space.
Example: φ(x) = e^{−|x|^2}.
11.1.2 Tempered Distributions
Definition 11.1.8: • A tempered distribution is a linear mapping φ 7→ (f , φ) from S(Rn)
into R which is continuous in the sense that (f , φn)→ (f , φ) if φn → φ in S(Rn).
• The set of all tempered distributions is denoted by S ′(Rn)
11.1.3 Distributional Derivatives
Definition 11.1.9: Let f ∈ D′(Ω). Then the derivative of f with respect to x_i is defined by

(∂f/∂x_i, φ) = −(f, ∂φ/∂x_i).    (11.1)
Remarks:
1) If f ∈ C^1(Ω), then the distributional derivative coincides with the classical derivative, and (11.1) is just the formula for integration by parts.
2) Analogously, the definition for higher order derivatives: for a multi-index α ∈ N_0^n,

(D^αf, φ) = (−1)^{|α|}(f, D^αφ).

3) Taking derivatives of distributions is continuous in the following sense:

if f_n → f in D′(Ω), then ∂f_n/∂x_i → ∂f/∂x_i in D′(Ω).
4) We can define a “translated” distribution f(x + he_i) by

(f(x + he_i), φ) = (f, φ(x − he_i)).

Then we see that

∂f/∂x_i = lim_{h→0} (f(x + he_i) − f(x))/h

is a limit of difference quotients.
Example: Consider the Heaviside function

H(x) = { 1, x > 0;  0, x ≤ 0 }  in D′(R).

[Figure: the step function H, which is not weakly differentiable.] Compute dH/dx in the sense of distributions:

(dH/dx, φ) = −(H, dφ/dx) = −∫_R H(x) (dφ/dx) dx = −∫_0^∞ (dφ/dx) dx = −φ|_0^∞ = φ(0) = (δ_0, φ)

=⇒ dH/dx = δ_0.
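This computation is easy to reproduce numerically. A sketch (illustration only): pair H against the derivative of a compactly supported φ on a grid; the value −∫ H φ′ dx returns φ(0).

```python
import numpy as np

# Sketch: the distributional derivative of the Heaviside function,
# (dH/dx, phi) = -(H, phi') = phi(0), checked by quadrature on [-1, 1].
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
H = (x > 0).astype(float)

phi = (1.0 - x**2) ** 4            # compactly supported on [-1, 1], phi(0) = 1
dphi = np.gradient(phi, dx)        # phi'

pairing = -(H * dphi).sum() * dx   # -(H, phi')
```

The quadrature value agrees with φ(0) = 1 to high accuracy.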
Recall:
• D(Ω) = test functions; φ_n → φ in D(Ω) means that supp(φ_n) ⊂ K ⊂⊂ Ω and D^αφ_n → D^αφ uniformly.
• D′(Ω) = distributions on Ω = continuous linear functionals on D(Ω); (f, φ_n) → (f, φ) if φ_n → φ.
• S(R^n) = rapidly decreasing test functions; φ_n → φ in S(R^n) means D^αφ_n → D^αφ uniformly ∀α, with |x|^k|D^αφ_n| ≤ C_{k,α} independent of n.
• S′(R^n) = tempered distributions = continuous linear functionals on S(R^n).
• f ∈ D′(Ω): (D^αf, φ) = (−1)^{|α|}(f, D^αφ).
Example: Ω ⊂ R^n open and bounded with smooth boundary, f ∈ C^1(Ω̄). Then f, extended by zero, defines a distribution in D′(R^n), and

(∂f/∂x_j, φ) = −(f, ∂φ/∂x_j) = −∫_{R^n} f(x) (∂φ/∂x_j)(x) dx = −∫_Ω f(x) (∂φ/∂x_j)(x) dx
= −∫_{∂Ω} f(x)φ(x)ν_j dS + ∫_Ω (∂f/∂x_j)(x)φ(x) dx:

distributional derivatives can have two terms, a volume term and a boundary term.
Proposition 11.1.10: Let Ω be open, bounded and connected, and u ∈ D′(Ω) such that ∇u = 0 (in the sense of distributions). Then u ≡ constant.
Proof: We do this only for n = 1; only a suitable induction is needed to prove the general case.
Ω = I = (a, b); by assumption u′ = 0, i.e. 0 = (u′, φ) = −(u, φ′)
=⇒ (u, ψ) = 0 whenever ψ = φ′ for some φ ∈ D(I).
But ψ = φ′ for some φ ∈ D(I) ⟺ ∫_I ψ(s) ds = 0 and ψ ∈ C^∞_c(I).
Choose any φ_0 ∈ D(I) with ∫_a^b φ_0(s) ds = 1. For φ ∈ D(I), write

φ(x) = [φ(x) − φ̄ φ_0(x)] + φ̄ φ_0(x),  where φ̄ = ∫_a^b φ(s) ds.

The bracket has integral ∫_a^b [φ(x) − φ̄ φ_0(x)] dx = φ̄ − φ̄ ∫_a^b φ_0(x) dx = 0, so it equals ψ′ for some ψ ∈ D(I):

φ(x) = ψ′(x) + φ̄ φ_0(x)

=⇒ (u, φ) = (u, ψ′) (= 0) + φ̄ (u, φ_0) = (u, φ_0) ∫_a^b φ(x) dx

=⇒ u = (u, φ_0) = constant.
11.2 Products and Convolutions of Distributions; Funda-
mental Solutions
Difficulty: There is no good definition of f · g for f, g ∈ D′(Ω). For example: f, g ∈ L^1(Ω), but f · g ∉ L^1_loc(Ω); then f · g ∉ D′(Ω).
However, if f ∈ D′(R^p), g ∈ D′(R^q), then we can define f · g ∈ D′(R^{p+q}) in the following way:

Definition 11.2.1: If f ∈ D′(R^p), g ∈ D′(R^q), then we define f(x)g(y) ∈ D′(R^{p+q}) by

(f(x)g(y), φ(x, y)) = (f(x), (g(y), φ(x, y))),

where in the inner pairing x is considered as a parameter.

Check:
• (g(y), φ(x, y)) is a test function (in x).
• Continuity: (f(x)g(y), φ_n(x, y)) → (f(x)g(y), φ(x, y)).
• The operation is commutative: f(x)g(y) = g(y)f(x). To check this, use that test functions of the form φ(x, y) = φ_1(x)φ_2(y) are dense.

Example:

(δ_0(x)δ_0(y), φ(x, y)) = (δ_0(x), (δ_0(y), φ(x, y))) = (δ_0(x), φ(x, 0)) = φ(0, 0),

so δ_0(x)δ_0(y) = δ_0(x, y).
11.2.1 Convolutions of Distributions
Suppose f, g ∈ S(Rⁿ) (or D(Rⁿ)). Then
    (f ∗ g, φ) = ∫_{Rⁿ} (f ∗ g)(x)φ(x) dx
               = ∫_{Rⁿ} ∫_{Rⁿ} f(x − y)g(y)φ(x) dy dx
               = ∫_{Rⁿ} [∫_{Rⁿ} f(x − y)φ(x) dx] g(y) dy
               = ∫_{Rⁿ} ∫_{Rⁿ} f(x)g(y)φ(x + y) dx dy
               = (f(x)g(y), φ(x + y))
Suggestion: If f , g ∈ S ′(Rn), then
(f ∗ g, φ) = (f (x)g(y), φ(x + y))
Difficulty: φ(x + y) does not have compact support in (x, y) even if φ does.
[Figure: in the (x, y)-plane, the set where φ(x + y) ≠ 0 is an unbounded strip around the line x + y = 0.]
Sufficient condition: We can define f ∗ g this way whenever the intersection of supp(f(x)g(y)) with the strip {x + y ∈ supp(φ)} is compact — e.g., when f or g has compact support. We then modify φ(x + y) outside this set so that it becomes compactly supported/rapidly decreasing.
Example:
1)
(δ0 ∗ f , φ) = (δ0(x)f (y), φ(x + y))
(commutativity) = (f (y)δ0(x), φ(x + y))
= (f (y), (δ0(x), φ(x + y)))
= (f (y), φ(y))
= (f , φ)
=⇒ δ0 ∗ f = f
2) f ∈ D′(Rn), ψ ∈ D(Rn)
    (f ∗ ψ, φ) = (f(x)ψ(y), φ(x + y))
               = (f(x), (ψ(y), φ(x + y)))
               = (f(x), ∫_{Rⁿ} ψ(y)φ(x + y) dy)
               = (f(x), ∫_{Rⁿ} ψ(y − x)φ(y) dy)
               = ∫_{Rⁿ} (f(x), ψ(y − x))φ(y) dy    (justified by Riemann sums)

Therefore, we see that f ∗ ψ can be identified with the function y ↦ (f(x), ψ(y − x)) ∈ C∞(Rⁿ) (that this function is C∞ needs to be checked).
Remark: Suppose that fn → f in D′(Rn) and that either ∃K compact, K ⊂ Rn with
supp(fn) ⊆ K, or that g ∈ D′(Rn) has compact support, then
fn ∗ g → f ∗ g in D′(Rn)
Theorem 11.2.2 (): D(Rn) is dense in D′(Rn).
Example: δ0 ↔ φε, where φε(x) = ε−nφ(ε−1x), φ ≥ 0 and∫Rn φ(x) dx = 1.
Idea of Proof:
(1) Distributions with compact support are dense in D′(Rn). To show this, choose test
functions φn ∈ D(Rn) with φn ≡ 1 on Bn(0). Now define φn · f ∈ D′(Rn), (φn · f , φ) =
(f , φn · φ) so that we have φn · f → f in D′(Rn) and φn · f has compact support.
(2) Distributions with compact support are limits of test functions. To show this, choose the standard mollifier φε(x) = ε⁻ⁿφ(ε⁻¹x) with φ ≥ 0 and ∫_{Rⁿ} φ(x) dx = 1. Then φε ∗ f is a test function and, by the above remark,

    φε ∗ f → δ0 ∗ f = f in D′(Rⁿ) as ε → 0.
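The mollification step can be illustrated numerically: pairing the rescaled bump φε against a smooth ψ approaches ψ(0), i.e., φε → δ0. The bump, the grid, and ψ below are illustrative choices, not the notes' specific mollifier:

```python
import numpy as np

# φ_ε(x) = ε⁻¹ φ(x/ε) with a standard C∞ bump φ supported in [-1, 1];
# (φ_ε, ψ) = ∫ ε⁻¹φ(x/ε)ψ(x) dx = ∫ φ(x)ψ(εx) dx → ψ(0) as ε → 0.
def bump(x):
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(-1.0 / (1.0 - x[m]**2))
    return out

x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
Z = np.sum(bump(x)) * dx                 # normalize so ∫φ = 1

def pair_with_mollifier(psi, eps):
    # substitution y = x/ε turns (φ_ε, ψ) into ∫ φ(x) ψ(εx) dx
    return np.sum(bump(x) / Z * psi(eps * x)) * dx

psi = lambda t: np.cos(t) + t**2         # ψ(0) = 1
vals = [pair_with_mollifier(psi, eps) for eps in (1.0, 0.1, 0.01)]
```

As ε shrinks, the pairings approach ψ(0) = 1.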
11.2.2 Derivatives of Convolutions
    (D^α(f ∗ g), φ) = (−1)^{|α|}(f ∗ g, D^αφ)
                    = (−1)^{|α|}(f(x)g(y), D^αφ(x + y))
                    = (−1)^{|α|}(g(y), (f(x), D^αφ(x + y)))
                    = (g(y), (D^αf(x), φ(x + y)))
                    = (D^αf(x)g(y), φ(x + y))
                    = (D^αf ∗ g, φ) = (f ∗ D^αg, φ)
11.2.3 Distributions as Fundamental Solutions

Let L(D) be a differential operator, e.g.,

    ∆:  L(D) = Σ_{j=1}^n aj·D²j,        ut − ∆u:  L(D) = b·Dt − Σ_{j=1}^n aj·D²j.
Definition 11.2.3: Suppose that L(D) is a differential operator with constant coefficients.
A fundamental solution for L is a distribution G ∈ D′(Rn) satisfying L(D)G = δ0.
Remark: Why is this a good definition?
L(D)(G ∗ f ) = (L(D)G) ∗ f = δ0 ∗ f = f
The first equality above uses the result obtained for differentiating convolutions. So if G ∗ f is defined (e.g., f has compact support), then u = G ∗ f is a solution of L(D)u = f.
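A one-dimensional sanity check of this remark (the example is an illustrative choice, not from the notes): G(x) = −|x|/2 satisfies −G″ = δ0, so u = G ∗ f should solve −u″ = f for a compactly supported source f.

```python
import numpy as np

# Convolve f against the 1D fundamental solution G(x) = -|x|/2 of -d²/dx²,
# then check -u'' ≈ f by central finite differences.
h = 0.01
x = np.arange(-10, 10 + h/2, h)
f = np.exp(-x**2) * (np.abs(x) < 6)      # effectively compactly supported

# u(x) = ∫ G(x - y) f(y) dy, by direct summation (trapezoid, endpoints ~0)
u = np.array([np.sum(-np.abs(xi - x)/2.0 * f) * h for xi in x])

lap = -(u[2:] - 2*u[1:-1] + u[:-2]) / h**2   # -u'' on interior points
err = np.max(np.abs(lap - f[1:-1]))
```

The residual is small, of the order of the discretization error.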
Example: f(x) = 1/|x| in R³.
Check: f is, up to a constant, a fundamental solution for the Laplace operator.

    (∆f, φ) = (f, ∆φ)
            = ∫_{R³} f(x)∆φ(x) dx
            = lim_{ε→0} ∫_{R³∖Bε(0)} (1/|x|) ∆φ(x) dx
            = lim_{ε→0} [ −∫_{R³∖Bε(0)} D(1/|x|)·Dφ(x) dx − ∫_{∂Bε(0)} (1/r) Dφ(x)·ν dS ]
            = lim_{ε→0} [ ∫_{R³∖Bε(0)} ∆(1/|x|) φ(x) dx + ∫_{∂Bε(0)} D(1/|x|)·ν φ(x) dS − ∫_{∂Bε(0)} (1/r) Dφ(x)·ν dS ]

Here ∆(1/|x|) = 0 away from the origin, the last term is of order Cε → 0, and the middle term is denoted I. So, from PDE I we know

    I = −4π ⨍_{∂Bε(0)} φ(x) dS → −4πφ(0) = −4π(δ0, φ).

This gives us the correct notion of what “∆G = δ0” really means.
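The limit above can be confirmed numerically with a radial test function (φ(x) = e^{−|x|²/2} is an illustrative choice): for radial φ in R³, ∆φ = φ″ + (2/r)φ′ and the pairing reduces to a 1D integral, which should equal −4πφ(0) = −4π.

```python
import numpy as np

# ∫_{R³} (1/|x|) ∆φ dx = 4π ∫₀^∞ r² (1/r) ∆φ(r) dr for radial φ.
# For φ(r) = e^{-r²/2}: ∆φ = φ'' + (2/r)φ' = (r² - 3) e^{-r²/2}.
r = np.linspace(1e-6, 12.0, 200001)
lap_phi = (r**2 - 3.0) * np.exp(-r**2 / 2)

integrand = 4*np.pi * r**2 * (1.0/r) * lap_phi
val = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(r) / 2))
# val should be close to -4π = -4π·φ(0)
```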
11.3 Fourier Transform of S(Rn)
Definition 11.3.1: If f ∈ S(Rⁿ), then we define

    Ff(ξ) = f̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} f(x) dx,
    F⁻¹f(x) = f̌(x) = (2π)^{−n/2} ∫_{Rⁿ} e^{ix·ξ} f(ξ) dξ.

F is the Fourier transform and F⁻¹ is the inverse Fourier transform.
Two important formulas:
1. D^α f̂(ξ) = F[(−ix)^α f](ξ)
2. (iξ)^β f̂(ξ) = F[D^β f](ξ)
Proof:
1.

    D^α_ξ f̂(ξ) = D^α_ξ [ (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} f(x) dx ]
               = (2π)^{−n/2} ∫_{Rⁿ} D^α_ξ [e^{−ix·ξ}] f(x) dx = F[(−ix)^α f](ξ)
2.

    (iξ)^β f̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} (iξ)^β e^{−ix·ξ} f(x) dx
                = (2π)^{−n/2} ∫_{Rⁿ} (−1)^{|β|} (−iξ)^β e^{−ix·ξ} f(x) dx
                = (2π)^{−n/2} ∫_{Rⁿ} (−1)^{|β|} D^β_x e^{−ix·ξ} · f(x) dx
       (I.P.)  = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} · D^β f(x) dx = F[D^β f](ξ).
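Formula 2 has a faithful discrete analogue that can be tested with the FFT (used here as a proxy for F; the Gaussian and grid sizes are illustrative choices): the transform of f′ matches iξ times the transform of f.

```python
import numpy as np

# F[f'](ξ) = iξ f̂(ξ), tested discretely for a rapidly decaying f.
N, L = 2048, 40.0
x = (np.arange(N) - N//2) * (L/N)
f  = np.exp(-x**2 / 2)
fp = -x * np.exp(-x**2 / 2)               # f' computed analytically

xi = 2*np.pi * np.fft.fftfreq(N, d=L/N)   # frequency grid for this sampling
lhs = np.fft.fft(fp)
rhs = 1j * xi * np.fft.fft(f)
err = np.max(np.abs(lhs - rhs))
```

Because the Gaussian decays fast in both domains, the agreement is essentially at machine precision.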
Proposition 11.3.2 (): F : S(Rn)→ S(Rn) is linear and continuous.
Proposition 11.3.3 (): F−1 : S(Rn)→ S(Rn) is linear and continuous.
Proof:

    (−1)^{|β|+|α|} ξ^β · D^α_ξ f̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} D^β(x^α f(x)) dx,

and D^β(x^α f(x)) is a finite sum of rapidly decreasing functions
⟹ f̂(ξ) is rapidly decreasing, f̂ ∈ S(Rⁿ).
Continuity: if fn → 0 in S(Rⁿ), then f̂n → 0 in S(Rⁿ). So suppose fn → 0; we show that D^α f̂n → 0 uniformly for every multi-index α. It will be sufficient to show this for α = 0:

    f̂n(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} fn(x) dx
           = (2π)^{−n/2} [ ∫_{BR(0)} e^{−ix·ξ} fn(x) dx + ∫_{Rⁿ∖BR(0)} e^{−ix·ξ} fn(x) dx ],

where |fn(x)| ≤ C/|x|^k since fn ∈ S(Rⁿ). Say k = n + 2:

    ∫_{Rⁿ∖BR(0)} |fn(x)| dx ≤ ∫_{Rⁿ∖BR(0)} C/|x|^{n+2} dx = C ∫_R^∞ ρ^{n−1}/ρ^{n+2} dρ = C ∫_R^∞ ρ^{−3} dρ = C/(2R²).

The second integral is as small as we want if R is large enough; the first integral is as small as we want if the index n is large enough, since fn → 0 uniformly. The argument for higher derivatives is analogous.
Theorem 11.3.4 (): If f ∈ S(Rⁿ), then

    F⁻¹(Ff) = f,    F(F⁻¹f) = f,

i.e., F⁻¹ is the inverse of the Fourier transform.
Fourier transform (recalled):

    (Ff)(ξ) = f̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·ξ} f(x) dx
    (F⁻¹f)(x) = f̌(x) = (2π)^{−n/2} ∫_{Rⁿ} e^{ix·ξ} f(ξ) dξ
Example: f(x) = e^{−|x|²/2}; then f̂(y) = e^{−|y|²/2}, i.e.,

    f̂(y) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·y} e^{−|x|²/2} dx = e^{−|y|²/2}.

It is sufficient to show this for n = 1:

    (2π)^{−1/2} ∫_R e^{−t²/2} e^{−it·u} dt = e^{−u²/2} ?

Completing the square,

    ∫_R e^{−t²/2−it·u} dt = e^{−u²/2} ∫_R e^{−(t+iu)²/2} dt.

One way to evaluate the last integral is to note that e^{−z²/2} is analytic, so by Cauchy's theorem its integral over a closed contour is 0.
[Figure: rectangular contour C in the complex plane with vertices ±λ and ±λ + iu, composed of the segments γ1, …, γ4 below.]

    γ1(s) = s, s ∈ (−λ, λ);   γ2(s) = λ + is, s ∈ (0, u);
    γ3(s) = −s + iu, s ∈ (−λ, λ);   γ4(s) = −λ + i(u − s), s ∈ (0, u).
    0 = ∮_C e^{−z²/2} dz
      = ∫_{−λ}^{λ} e^{−s²/2} ds + ∫_0^u e^{−(λ+is)²/2} i ds + ∫_{−λ}^{λ} e^{−(−s+iu)²/2} (−1) ds + ∫_0^u e^{−(−λ+i(u−s))²/2} (−i) ds,

with the second integral denoted I2 and the fourth I4. Looking at I2:

    I2 = i ∫_0^u e^{−λ²/2} e^{−iλs} e^{s²/2} ds → 0 as λ → ∞,

since e^{−λ²/2} → 0 and |e^{−iλs} e^{s²/2}| ≤ C on [0, u].
Similarly, I4 → 0 as λ → ∞. Thus, in the limit we have

    lim_{λ→∞} ∫_{−λ}^{λ} e^{−s²/2} ds = lim_{λ→∞} ∫_{−λ}^{λ} e^{−(−s+iu)²/2} ds

    ⟹ √(2π) = ∫_{−∞}^{∞} e^{−s²/2} ds = ∫_{−∞}^{∞} e^{−(t+iu)²/2} dt = ∫_{−∞}^{∞} e^{−t²/2} e^{−iut} e^{u²/2} dt,

so (2π)^{−1/2} ∫_R e^{−t²/2} e^{−itu} dt = e^{−u²/2}, as claimed.

Remark: The generalization of this is the following: if f(x) = e^{−ε|x|²}, then f̂(ξ) = (2ε)^{−n/2} e^{−|ξ|²/4ε}. This is basically the same calculation as the previous example.
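The Gaussian fixed-point property can also be confirmed by direct quadrature of the defining integral (the grid and the sampled frequencies u are illustrative choices):

```python
import numpy as np

# (2π)^{-1/2} ∫ e^{-t²/2} e^{-itu} dt should equal e^{-u²/2}.
t = np.linspace(-30.0, 30.0, 400001)
dt = t[1] - t[0]
g = np.exp(-t**2 / 2)

errs = []
for u in (0.0, 0.7, 1.5, 3.0):
    ghat = (2*np.pi)**-0.5 * np.sum(g * np.exp(-1j * t * u)) * dt
    errs.append(abs(ghat - np.exp(-u**2 / 2)))
max_err = max(errs)
```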
Theorem 11.3.5 (): f ∈ S(Rn), F−1F f = f , FF−1f = f .
Proof: Let f, g ∈ S(Rⁿ). Then

    ∫_{Rⁿ} g(ξ) f̂(ξ) e^{ix·ξ} dξ = ∫_{Rⁿ} g(ξ) [ (2π)^{−n/2} ∫_{Rⁿ} e^{−iy·ξ} f(y) dy ] e^{ix·ξ} dξ
        = (2π)^{−n/2} ∫_{Rⁿ} [ ∫_{Rⁿ} e^{−iξ·(y−x)} g(ξ) dξ ] f(y) dy
        = ∫_{Rⁿ} ĝ(y − x) f(y) dy = ∫_{Rⁿ} ĝ(y) f(x + y) dy.

Replace g(ξ) by g(ε·ξ). We need to compute the transform of g(ε·):

    (2π)^{−n/2} ∫_{Rⁿ} e^{−iy·ξ} g(ε·ξ) dξ = (2π)^{−n/2} (1/εⁿ) ∫_{Rⁿ} e^{−iy·z/ε} g(z) dz = (1/εⁿ) ĝ(y/ε).

So, we see that

    ∫_{Rⁿ} g(ε·ξ) f̂(ξ) e^{ix·ξ} dξ = (1/εⁿ) ∫_{Rⁿ} ĝ(y/ε) f(x + y) dy = ∫_{Rⁿ} ĝ(y) f(x + εy) dy.

Choose g(x) = e^{−|x|²/2} and take the limit ε → 0:

    g(0) ∫_{Rⁿ} f̂(ξ) e^{ix·ξ} dξ = f(x) ∫_{Rⁿ} ĝ(y) dy = f(x) ∫_{Rⁿ} e^{−|y|²/2} dy = (2π)^{n/2} f(x)

    ∴ f(x) = (2π)^{−n/2} ∫_{Rⁿ} e^{ix·ξ} f̂(ξ) dξ = ((F⁻¹F)f)(x).

The proof of (FF⁻¹f)(x) = f(x) is analogous.
Remark: Suppose L(D) is a differential operator with constant coefficients, e.g.,

    L(D) = Σ_{i=1}^n D²ᵢ = −∆.
Then F(L(D)u) = L(iξ)û, where L(iξ) is just the symbol of the operator.
Example: Suppose we can extend the Fourier transform to L²(Rⁿ) (see below). Then, with L(D) = −∆ and L(iξ) = Σ ξᵢ² = |ξ|², we can find the solution of −∆u = f ∈ L²(Rⁿ) by Fourier transform:

    F(−∆u) = f̂
    ⟺ |ξ|² û(ξ) = f̂(ξ)
    ⟺ û(ξ) = f̂(ξ)/|ξ|²
    ⟺ u(x) = F⁻¹( f̂(ξ)/|ξ|² )(x)
Theorem 11.3.6 (): If f, g ∈ S(Rⁿ), then (f, g) = (f̂, ĝ), i.e., the Fourier transform preserves the inner product

    (f, g) = ∫_{Rⁿ} f · ḡ dx.

Remark: When using the Fourier transform we have to work with complex-valued functions; |z|² = z·z̄.
Proof: We compute, using g(x) = (F⁻¹Fg)(x):

    ∫_{Rⁿ} f(x)·ḡ(x) dx = ∫_{Rⁿ} f(x) · conj( (2π)^{−n/2} ∫_{Rⁿ} e^{ix·ξ} ĝ(ξ) dξ ) dx
        = ∫_{Rⁿ} conj(ĝ(ξ)) · (2π)^{−n/2} ∫_{Rⁿ} f(x) e^{−ix·ξ} dx dξ
        = ∫_{Rⁿ} f̂(ξ)·conj(ĝ(ξ)) dξ = (f̂, ĝ)
11.4 Definition of Fourier Transform on L2(Rn)
Proposition 11.4.1 (): If u, v ∈ L¹(Rⁿ) ∩ L²(Rⁿ), then (u ∗ v)^ = (2π)^{n/2} û·v̂.
Proof:

    (u ∗ v)^(y) = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·y} (u ∗ v)(x) dx
                = (2π)^{−n/2} ∫_{Rⁿ} e^{−ix·y} [ ∫_{Rⁿ} u(z) v(x − z) dz ] dx
                = (2π)^{−n/2} ∫_{Rⁿ} e^{−iz·y} u(z) ∫_{Rⁿ} e^{−iy·(x−z)} v(x − z) dx dz    (substitute w = x − z)
                = ∫_{Rⁿ} e^{−iz·y} u(z) v̂(y) dz
                = (2π)^{n/2} û(y) v̂(y)
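The discrete analogue of this proposition is the FFT convolution theorem: after zero-padding (so circular convolution equals linear convolution), the DFT turns convolution into a pointwise product. The normalization differs from the continuous (2π)^{n/2} convention, but the structural statement is the same:

```python
import numpy as np

# ifft(fft(u)·fft(v)) with padding reproduces the direct linear convolution.
rng = np.random.default_rng(0)
u = rng.standard_normal(50)
v = rng.standard_normal(70)

n = len(u) + len(v) - 1                # pad so circular conv = linear conv
conv_fft = np.fft.ifft(np.fft.fft(u, n) * np.fft.fft(v, n)).real
conv_direct = np.convolve(u, v)        # (u*v)[k] = Σ u[j] v[k-j]
err = np.max(np.abs(conv_fft - conv_direct))
```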
Theorem 11.4.2 (Plancherel's Theorem): Suppose that u ∈ L¹(Rⁿ) ∩ L²(Rⁿ). Then û ∈ L²(Rⁿ) and

    ||û||_{L²(Rⁿ)} = ||u||_{L²(Rⁿ)}

Proof: Let v(x) = ū(−x), and define w = u ∗ v, so that ŵ = (2π)^{n/2} û·v̂. Now w ∈ L¹(Rⁿ) and w is continuous (using Proposition 8.31). Next,

    v̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} ū(−x) · e^{−ix·ξ} dx.

Let y = −x, so

    v̂(ξ) = (2π)^{−n/2} ∫_{Rⁿ} ū(y) · e^{iy·ξ} dy = conj( (2π)^{−n/2} ∫_{Rⁿ} u(y) e^{−iy·ξ} dy ) = conj(û(ξ)).

Now we claim that for any v, w ∈ L¹(Rⁿ),

    ∫_{Rⁿ} v(x)·ŵ(x) dx = ∫_{Rⁿ} v̂(ξ)·w(ξ) dξ.

Check:

    ∫_{Rⁿ} v(x)·ŵ(x) dx = ∫_{Rⁿ} v(x) · [ (2π)^{−n/2} ∫_{Rⁿ} w(ξ) e^{−ix·ξ} dξ ] dx
        = ∫_{Rⁿ} [ (2π)^{−n/2} ∫_{Rⁿ} v(x) e^{−ix·ξ} dx ] · w(ξ) dξ
        = ∫_{Rⁿ} v̂(ξ)·w(ξ) dξ.

Now set v(ξ) = e^{−ε|ξ|²}, so that v̂(x) = (2ε)^{−n/2} e^{−|x|²/4ε} (as shown in the example earlier). Then

    ∫_{Rⁿ} ŵ(ξ) e^{−ε|ξ|²} dξ = (2ε)^{−n/2} ∫_{Rⁿ} w(x) e^{−|x|²/4ε} dx.

Now take the limit ε → 0 to get

    ∫_{Rⁿ} ŵ(ξ) dξ = (2π)^{n/2} w(0)

(w(0) appears because the right-hand side is a limit representation of the Dirac distribution; note that (2ε)^{−n/2} ∫ e^{−|x|²/4ε} dx = (2π)^{n/2}). But we know w = u ∗ v and

    ŵ = (2π)^{n/2} û·v̂ = (2π)^{n/2} û·conj(û) = (2π)^{n/2} |û|²,

so

    (2π)^{n/2} w(0) = ∫_{Rⁿ} ŵ(ξ) dξ = (2π)^{n/2} ∫_{Rⁿ} |û|² dξ ⟹ w(0) = ∫_{Rⁿ} |û|² dξ.

We also know

    w = u ∗ v ⟹ w(0) = ∫_{Rⁿ} u(y)·v(−y) dy = ∫_{Rⁿ} u(y)·ū(y) dy = ∫_{Rⁿ} |u|² dy.

Thus,

    ∫_{Rⁿ} |û|² dξ = w(0) = ∫_{Rⁿ} |u|² dy ⟹ ||û||_{L²(Rⁿ)} = ||u||_{L²(Rⁿ)}
Definition 11.4.3: If u ∈ L²(Rⁿ), choose uk ∈ L¹(Rⁿ) ∩ L²(Rⁿ) such that uk → u in L²(Rⁿ). By Plancherel's Theorem,

    ||ûk − ûl||_{L²(Rⁿ)} = ||uk − ul||_{L²(Rⁿ)} → 0 as k, l → ∞,

so ûk is Cauchy in L²(Rⁿ). The limit is defined to be û ∈ L²(Rⁿ).
Note: This definition does not depend on the approximating sequence.
Remarks:
1) We have

    ||û||_{L²(Rⁿ)} = ||u||_{L²(Rⁿ)}

for all u ∈ L²(Rⁿ), by approximating u in L²(Rⁿ) by functions uk ∈ L¹(Rⁿ) ∩ L²(Rⁿ) and applying Plancherel's Theorem.
2) You can use this to compute integrals. If both u and û are known explicitly, then

    ∫_R |u|² dx = ∫_R |û|² dx
Theorem 11.4.4 (): If u, v ∈ L²(Rⁿ), then
i.) (u, v) = (û, v̂), i.e., ∫_{Rⁿ} u·v̄ dx = ∫_{Rⁿ} û·conj(v̂) dx;
ii.) (D^α u)^(y) = (iy)^α û(y) if D^α u ∈ L²(Rⁿ);
iii.) (u ∗ v)^ = (2π)^{n/2} û·v̂;
iv.) u = (û)ˇ = F⁻¹[F(u)].
Proofs: See Evans.
11.5 Applications of FTs: Solutions of PDEs
(1) Solve −∆u + u = f in Rⁿ, f ∈ L²(Rⁿ).
The Fourier transform converts the PDE into an algebraic equation:

    |y|²û + û = f̂
    ⟺ û = f̂/(|y|² + 1)
    ⟺ u = F⁻¹( f̂/(|y|² + 1) ) = F⁻¹(f̂·B̂),

where B̂ = 1/(|y|² + 1). Since (u ∗ v)^ = (2π)^{n/2} û·v̂, we have û = f̂·B̂ = (2π)^{−n/2}(f ∗ B)^, i.e.,

    u = (2π)^{−n/2}(f ∗ B).
Now we need to find B:

Trick: 1/a = ∫_0^∞ e^{−ta} dt (a > 0)

    ⟹ 1/(|y|² + 1) = ∫_0^∞ e^{−t(|y|²+1)} dt.

So,

    B(x) = (F⁻¹B̂)(x)
         = (2π)^{−n/2} ∫_{Rⁿ} e^{ix·y} ∫_0^∞ e^{−t(|y|²+1)} dt dy
         = (2π)^{−n/2} ∫_0^∞ e^{−t} ∫_{Rⁿ} e^{ix·y} e^{−t|y|²} dy dt
         = ∫_0^∞ e^{−t} (2t)^{−n/2} e^{−|x|²/4t} dt    (change of variables).

This last integral is called the Bessel potential. So, we at last have an explicit representation of our solution:

    u(x) = (2π)^{−n/2}(B ∗ f)(x)
         = (4π)^{−n/2} ∫_0^∞ ∫_{Rⁿ} t^{−n/2} e^{−t−|x−y|²/4t} f(y) dy dt
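The recipe û = f̂/(|ξ|² + 1) can be tested in one dimension with the FFT as a proxy for F on a wide grid (all numerical choices are illustrative): pick u, form f = −u″ + u analytically, recover u from f in Fourier space, and compare.

```python
import numpy as np

# For u(x) = e^{-x²/2}: u'' = (x² - 1)e^{-x²/2}, so f = -u'' + u = (2 - x²)e^{-x²/2}.
N, L = 4096, 80.0
x = (np.arange(N) - N//2) * (L/N)
u_exact = np.exp(-x**2 / 2)
f = (2.0 - x**2) * np.exp(-x**2 / 2)

xi = 2*np.pi * np.fft.fftfreq(N, d=L/N)
u_num = np.fft.ifft(np.fft.fft(f) / (xi**2 + 1.0)).real   # û = f̂/(ξ² + 1)
err = np.max(np.abs(u_num - u_exact))
```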
(2) Heat equation:

    ut − ∆u = 0 in Rⁿ × (0, ∞)
    u = g on Rⁿ × {t = 0}

Take the Fourier transform in the spatial variables:

    ût(y, t) + |y|²û(y, t) = 0 in Rⁿ × (0, ∞)
    û(y, 0) = ĝ(y).

Now, we can view this as an ODE in t:

    û(y, t) = ĝ(y) e^{−|y|²t}
    ⟹ u(x, t) = F⁻¹[ĝ(y) e^{−|y|²t}](x) = (2π)^{−n/2}(g ∗ F)(x),

where F̂(y) = e^{−|y|²t}. Changing variables as before, F(x) = (2t)^{−n/2} e^{−|x|²/4t}, so

    u(x, t) = (2π)^{−n/2}(g ∗ F)(x, t) = (4πt)^{−n/2} ∫_{Rⁿ} g(y) e^{−|x−y|²/4t} dy.

This is just the representation formula from last term!
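The spectral solution can be checked against the representation formula in one dimension (grid sizes are illustrative): for Gaussian data g(x) = e^{−x²/2}, convolving with the heat kernel gives u(x, t) = (1 + 2t)^{−1/2} e^{−x²/(2(1+2t))}.

```python
import numpy as np

# Solve u_t = u_xx spectrally: û(ξ, t) = ĝ(ξ) e^{-ξ² t}, then compare with
# the exact Gaussian solution.
N, L, t = 4096, 80.0, 0.5
x = (np.arange(N) - N//2) * (L/N)
g = np.exp(-x**2 / 2)

xi = 2*np.pi * np.fft.fftfreq(N, d=L/N)
u = np.fft.ifft(np.fft.fft(g) * np.exp(-xi**2 * t)).real

u_exact = (1 + 2*t)**-0.5 * np.exp(-x**2 / (2*(1 + 2*t)))
err = np.max(np.abs(u - u_exact))
```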
11.6 Fourier Transform of Tempered Distributions
Definition 11.6.1: If f ∈ S′(Rⁿ), then we define

    (F(f), φ) = (f, F⁻¹(φ)).

Check that the following hold:

    D^α f̂ = F((−ix)^α f),    (iξ)^α f̂ = F(D^α f).
Examples:
(1) f = δ0:

    〈F(δ0), φ〉 = 〈δ0, F⁻¹φ〉 = (F⁻¹φ)(0) = (2π)^{−n/2} ∫_{Rⁿ} φ(x) dx = 〈(2π)^{−n/2}, φ〉
    ⟹ F(δ0) = (2π)^{−n/2}

(2) F(1), where 1 is the constant function:

    〈F(1), φ〉 = 〈1, F⁻¹φ〉 = ∫_{Rⁿ} F⁻¹(φ)(x) dx = (2π)^{n/2} (FF⁻¹φ)(0) = (2π)^{n/2} φ(0) = 〈(2π)^{n/2} δ0, φ〉
    ⟹ F(1) = (2π)^{n/2} δ0
11.7 Exercises
11.1: Show that the Dirac delta distribution cannot be identified with any continuous function.
11.2: Let Ω = (0, 1). Find an example of a distribution f ∈ D′(Ω) of infinite order.
11.3: Let a ∈ R, a ≠ 0. Find a fundamental solution for L = d/dx − a on R.
11.4: Find a sequence of test functions that converges in the sense of distributions in R to δ′.
11.5: Let Ω = B(0, 1) be the unit ball in R², and let w ∈ W^{1,p}(B(0, 1); R²) be a vector-valued function, that is, w(x) = (u(x), v(x)) with u, v ∈ W^{1,p}(Ω). Recall that Dw, the gradient of w, is given by

    Dw = ( ∂1u  ∂2u
           ∂1v  ∂2v ).
a) Suppose w is smooth. Show that
∂1(u · ∂2v)− ∂2(u · ∂1v) = detDw.
b) Let w ∈ W^{1,p}(B(0, 1); R²). We want to define a distribution Det Dw ∈ D′(Ω) by

    〈Det Dw, φ〉 = −∫_Ω ( u·∂2v·∂1φ − u·∂1v·∂2φ ) dx.

Find the smallest p ≥ 1 such that Det Dw defines a distribution, that is, the integral on the right-hand side exists.
c) It is in general not true that Det Dw = det Dw. Here is an example: let w(x) = x/|x|. Show that there exists a constant c such that

    Det Dw = cδ0,

but that det Dw(x) = 0 for a.e. x ∈ Ω.
11.6: [Qualifying exam 08/98] Assume u: Rⁿ → R is continuous and its first-order distributional derivatives Du are bounded functions, with |Du(x)| ≤ L for all x. By approximating u by mollification, prove that u is Lipschitz, with

    |u(x) − u(y)| ≤ L|x − y| ∀x, y ∈ Rⁿ.
11.7: [Qualifying exam 01/03] Consider the operator

    A: D(A) → L²(Rⁿ),

where D(A) = H²(Rⁿ) ⊂ L²(Rⁿ) and Au = ∆u for u ∈ D(A). Show that
a) the resolvent set ρ(A) contains the interval (0, ∞), i.e., (0, ∞) ⊂ ρ(A);
b) the estimate

    ||(λI − A)⁻¹|| ≤ 1/λ for λ > 0

holds.
11.8: Let f: R → R be defined by

    f(x) = { sin x  if x ∈ (−π, π),
             0      else. }

Show that

    f̂(ξ) = i·√(2/π) · sin(πξ)/(ξ² − 1).
11.9: For real-valued functions f and s ≥ 0 we define

    Hˢ(Rⁿ) = { f ∈ L²(Rⁿ) : ∫_{Rⁿ} (1 + |ξ|²)ˢ |f̂(ξ)|² dξ < ∞ }.
a) Show that f̂(−ξ) = conj(f̂(ξ)).
b) Let f ∈ Hˢ(Rⁿ) with s ≥ 1. Show that g(x) = F⁻¹(iξ f̂(ξ))(x) is a (vector-valued) function with values in the real numbers.
c) Prove that H¹(R) = W^{1,2}(R). (This identity is in fact true in any space dimension, for every s ∈ N.)
d) Show that f̂ ∈ C⁰(Rⁿ) if f ∈ L¹(Rⁿ). Analogously, if f̂ ∈ L¹(Rⁿ), then f ∈ C⁰(Rⁿ) (you need not show the latter assertion).
e) Show that Hˢ(Rⁿ) ↪ C⁰(Rⁿ) if s > n/2. Hint: Use Hölder's inequality.
11.10:
a) Find the Fourier transform of χ[−1,1], the characteristic function of the interval [−1, 1].
b) Find the Fourier transform of the function

    f(x) = { 2 + x for x ∈ [−2, 0],
             2 − x for x ∈ [0, 2],
             0 else. }

Hint: What is χ[−1,1] ∗ χ[−1,1]?
c) Find

    ∫_R (sin²ξ)/ξ² dξ.

d) Show that χ[−1,1] ∈ H^{1/2−ε}(R) for all ε ∈ (0, 1/2].
12 Variational Methods
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
12.1 Direct Method of Calculus of Variations
12.2 Integral Constraints
12.3 Uniqueness of Solutions
12.4 Energies
12.5 Weak Formulations
12.6 A General Existence Theorem
12.7 A Few Applications to Semilinear Equations
12.8 Regularity
12.9 Exercises
12.1 Direct Method of Calculus of Variations
We wish to solve −∆u = 0 by finding minimizers of

    F(u) = ½ ∫_Ω |Du|² dx.

Crucial Question: Can we decide whether there exists a minimizer?
General Question: If X is a Banach space with norm ||·|| and F: X → R, does F attain a minimum?
The answer will be yes, if we have the following:
• F bounded from below:
  let M = inf_{x∈X} F(x) > −∞
• Can choose a minimizing sequence xn ∈ X such that

    F(xn) → M as n → ∞.

• Bounded sets are compact: suppose (for a subsequence) xk → x.
• Continuity property: F(xk) → F(x) = M, so x is a minimizer.
It turns out that lower semicontinuity will be enough:

    M = lim inf_{k→∞} F(xk) ≥ F(x) ≥ M ⟹ F(x) = M, x a minimizer.

There is a give and take between convergence and continuity:

                          Strong Convergence   Weak Convergence (our choice)
    Compactness           difficult            easy
    Lower Semicontinuity  easy                 difficult
Definition 12.1.1: Let 1 < p < ∞. A function F is said to be (sequentially) weakly lower semicontinuous (swls) on W^{1,p}(Ω) if uk ⇀ u in W^{1,p}(Ω) implies that

    F(u) ≤ lim inf_{k→∞} F(uk)
Example: The Dirichlet integral

    F(u) = ∫_Ω ½|Du|² dx

and its generalization

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx

with f ∈ L²(Ω), on X = H¹₀(Ω).
Goal: There exists a unique minimizer in H¹₀(Ω), which solves −∆u = f in the weak sense.
(1) Lower bound on F:

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx
 (Hölder) ≥ ∫_Ω ½|Du|² dx − ||f||_{L²(Ω)} ||u||_{L²(Ω)}

Now we use Young's inequality, which follows from (√ε·a − b/(2√ε))² ≥ 0:

    ab ≤ εa² + b²/(4ε)

 (Young) ≥ ∫_Ω ½|Du|² dx − ε||u||²_{L²(Ω)} − (1/4ε)||f||²_{L²(Ω)}

——
Recall Poincaré's inequality in H¹₀(Ω): ∃Cp such that ||u||²_{L²(Ω)} ≤ Cp||Du||²_{L²(Ω)}.
——

 (Poincaré) ≥ ∫_Ω ½|Du|² dx − ε·Cp ∫_Ω |Du|² dx − (1/4ε)||f||²_{L²(Ω)}

Now choose ε = 1/(4Cp):

    ⟹ F(u) ≥ ¼ ∫_Ω |Du|² dx − Cp ∫_Ω |f|² dx ≥ C > −∞,

since the first term is ≥ 0 and the second is a fixed finite number (f ∈ L²(Ω)).
Recall:
Goal: We want to solve −∆u = f by variational methods, i.e., we need to prove the existence of a minimizer of the functional

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx.

We can do this via the direct method of the calculus of variations. This entails:
• showing F is bounded from below;
• choosing a minimizing sequence un;
• showing that this minimizing sequence is bounded, so we can use weak compactness of bounded sets to get un ⇀ u;
• proving the lower semicontinuity of F:

    M ≤ F(u) ≤ lim inf_{n→∞} F(un) = M.

So we start where we left off:
(2)

    ¼ ∫_Ω |Du|² dx − Cp·∫_Ω |f|² dx ≤ F(u)    (12.1)
    ⟹ F(u) ≥ −Cp·||f||²_{L²(Ω)} > −∞ ⟹ F is bounded from below.
(3) Define

    M = inf_{u∈H¹₀(Ω)} F(u) > −∞.

Choose un ∈ H¹₀(Ω) such that F(un) → M as n → ∞.

(4) Boundedness of un: choose n₀ large enough such that F(un) ≤ M + 1 for n ≥ n₀. (12.1) tells us that

    ¼ ∫_Ω |Dun|² dx ≤ Cp ∫_Ω |f|² dx + F(un) ≤ Cp ∫_Ω |f|² dx + M + 1 for n ≥ n₀.

——
Poincaré: ||un||²_{L²(Ω)} ≤ Cp||Dun||²_{L²(Ω)}
——

⟹ un is bounded in H¹₀(Ω)
⟹ weak compactness: ∃ a subsequence unⱼ such that unⱼ ⇀ u in H¹₀(Ω) as j → ∞, and
compact Sobolev embedding: ∃ a further subsequence unⱼₖ → u in L²(Ω). Call this subsequence {uk}_{k∈N}.
(5) Weak lower semicontinuity: we need

    F(u) ≤ lim inf_{k→∞} F(uk) = lim inf_{k→∞} ∫_Ω ( ½|Duk|² − fuk ) dx.

Since uk → u in L²(Ω), we know

    ∫_Ω fuk dx → ∫_Ω fu dx.

A question remains: does the following relation hold?

    ½ ∫_Ω |Du|² dx ≤ lim inf_{k→∞} ½ ∫_Ω |Duk|² dx
To see this we utilize the following:
Note:

    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|²
    ⟺ ½|p|² + ½|q|² − p·q = ½|p − q|²
    ⟺ ½[|p|² − 2p·q + |q|²] = ½|p − q|²
Choose p = Duk and q = Du:

    ∫_Ω ½|Duk|² dx = ∫_Ω [ ½|Du|² + Du·(Duk − Du) + ½|Duk − Du|² ] dx
                   ≥ ∫_Ω ½|Du|² dx + ∫_Ω Du·(Duk − Du) dx,

since ½|Duk − Du|² ≥ 0; and the last term → 0 as k → ∞, because Duk − Du ⇀ 0 in L²(Ω). Hence

    lim inf_{k→∞} ∫_Ω ½|Duk|² dx ≥ ∫_Ω ½|Du|² dx + lim inf_{k→∞} ∫_Ω Du·(Duk − Du) dx = ∫_Ω ½|Du|² dx

——
Trick: The inequality

    ½|p|² ≥ ½|q|² + (q, p − q)

allows us to replace the quadratic term in Duk by a term that is linear in Duk.
——

⟹ u is a minimizer!
(6) Uniqueness: Use again that

    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|².

Suppose we have two minimizers u, v. Define w = ½(u + v). Choose p = Du, q = ½(Du + Dv) = Dw, so p − q = Du − ½(Du + Dv) = ½(Du − Dv):

    ∫_Ω ½|Du|² dx = ∫_Ω [ ½|Dw|² + ½ Dw·(Du − Dv) + ½|½(Du − Dv)|² ] dx.

We apply the same identity with p = Dv, q = Dw and p − q = Dv − ½(Du + Dv) = ½(Dv − Du):

    ∫_Ω ½|Dv|² dx = ∫_Ω [ ½|Dw|² + ½ Dw·(Dv − Du) + ½|½(Dv − Du)|² ] dx.

Now we sum these identities; the middle terms cancel:

    ½ ∫_Ω |Du|² dx + ½ ∫_Ω |Dv|² dx = ∫_Ω |Dw|² dx + ¼ ∫_Ω |Du − Dv|² dx.

Now we remember that u and v are minimizers of

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx.
So, we calculate

    ∫_Ω ( ½|Du|² − fu ) dx + ∫_Ω ( ½|Dv|² − fv ) dx
        = ∫_Ω ( |Dw|² − (u + v)f ) dx + ¼ ∫_Ω |Du − Dv|² dx    (note (u + v)f = 2wf)
        = 2 ∫_Ω ( ½|Dw|² − wf ) dx + ¼ ∫_Ω |Du − Dv|² dx

    ⟹ M + M ≥ 2M + ¼ ∫_Ω |Du − Dv|² dx
    ⟹ 0 ≥ ¼ ∫_Ω |Du − Dv|² dx
    ⟹ Du = Dv; since u, v ∈ H¹₀(Ω), this gives u = v ⟹ uniqueness!
Recall: To solve

    −∆u = f in Ω
    u = 0 on ∂Ω,

we looked for a minimizer in H¹₀(Ω) of the functional

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx.

Already done: existence and uniqueness of the minimizer.

(7) Euler–Lagrange equation: We have the following necessary condition. If v ∈ H¹₀(Ω), the function ε ↦ F(u + εv) has a minimum at ε = 0, so

    d/dε|_{ε=0} F(u + εv) = 0.

So for our Dirichlet integral we have

    d/dε|_{ε=0} ∫_Ω [ ½|Du + ε·Dv|² − f(u + εv) ] dx = ∫_Ω ( Du·Dv − fv ) dx = 0,

i.e.,

    ∫_Ω ( Du·Dv − fv ) dx = 0 ∀v ∈ H¹₀(Ω),

i.e., u is a weak solution of −∆u = f.
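The whole direct method can be sketched numerically in one dimension (all discretization choices here are illustrative): minimize the discrete analogue of F(u) = ∫₀¹(½|u′|² − fu)dx over grid functions vanishing at the endpoints, by plain gradient descent. With f ≡ 1 the minimizer should solve −u″ = 1, u(0) = u(1) = 0, i.e., u(x) = x(1 − x)/2.

```python
import numpy as np

# Gradient descent on the discrete Dirichlet-type energy
#   F_h(u) = Σ_edges ½ h ((u_{i+1}-u_i)/h)² - Σ_i h f_i u_i.
m = 49                       # interior grid points
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
f = np.ones(m)
u = np.zeros(m)

lr = h / 4                   # stable step size for this quadratic energy
for _ in range(40000):
    up = np.concatenate(([0.0], u, [0.0]))      # enforce u(0) = u(1) = 0
    # ∂F_h/∂u_i = (2u_i - u_{i-1} - u_{i+1})/h - h f_i
    grad = (2*up[1:-1] - up[:-2] - up[2:]) / h - h * f
    u -= lr * grad

u_exact = x * (1 - x) / 2
err = np.max(np.abs(u - u_exact))
```

Since the exact solution is quadratic, the discrete minimizer matches it at the nodes up to the descent tolerance.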
Remarks:
1) Laplace's equation has a variational structure, i.e., its weak formulation is the Euler–Lagrange equation of a variational problem.
2) f: R → R is lower semicontinuous if xn → x implies

    f(x) ≤ lim inf_{n→∞} f(xn).

f continuous ⟹ f lower semicontinuous. If f is discontinuous at x, we have the following situations:
[Figure: at a jump point x, a lower semicontinuous function takes the lower of the two values, an upper semicontinuous function the upper one.]
3) Weak convergence is not compatible with nonlinear functions: if fn ⇀ f in L²(Ω) and g is a nonlinear function, then in general g(fn) does not converge weakly to g(f).
Example: fn = sin(nx) on (0, π); fn ⇀ 0 in L²((0, π)).
Need:

    ∫_0^π fn·g dx → 0 ∀g ∈ L²((0, π)).

It will be sufficient to show this for φ ∈ C∞c((0, π)):

    ∫_0^π sin(nx)φ(x) dx = ∫_0^π d/dx( −(1/n)cos(nx) ) φ(x) dx
                         = (1/n) ∫_0^π cos(nx) φ′(x) dx    (integrating by parts; |cos| ≤ 1)
                         ≤ (1/n)·max|φ′| ∫_0^π dx
                         = (π/n)·max|φ′| → 0 as n → ∞
Now choose g(x) = x². Does g(fn) ⇀ g(f)? NO:

    lim_{n→∞} ∫_0^π sin²(nx)φ(x) dx = lim_{n→∞} ∫_0^π d/dx[ x/2 − (1/4n)sin(2nx) ] φ(x) dx
        = lim_{n→∞} ∫_0^π ½φ(x) dx + lim_{n→∞} (1/4n) ∫_0^π sin(2nx)φ′(x) dx
        = ∫_0^π ½φ(x) dx,

since the second limit is 0. So sin²(nx) ⇀ ½ in L²((0, π)).
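Both weak limits can be seen numerically against a fixed smooth φ (an illustrative choice): ∫ sin(nx)φ dx → 0 while ∫ sin²(nx)φ dx → ½∫φ dx.

```python
import numpy as np

# Pair sin(nx) and sin²(nx) against a fixed test function on (0, π).
x = np.linspace(0.0, np.pi, 200001)
dx = x[1] - x[0]
phi = np.sin(x)**2 * np.exp(-x)   # smooth, vanishes at the endpoints

def trap(y):
    return float(np.sum((y[1:] + y[:-1]) * dx / 2))

half_mass = 0.5 * trap(phi)
for n in (10, 100, 1000):
    a = trap(np.sin(n*x) * phi)       # should tend to 0
    b = trap(np.sin(n*x)**2 * phi)    # should tend to half_mass
```

After the loop, a and b hold the n = 1000 values, already close to the limits.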
So, we need lower semicontinuity since F(u) will in general not be linear (for our Dirichlet integral, |Du|² is not linear), and thus passing a weak limit inside F(u) isn't justified. Lower semicontinuity saves this by trapping the weak sequence as follows:

    inf_{v∈H¹₀(Ω)} F(v) ≤ F(u) ≤ lim inf_{k→∞} F(uk) = inf_{v∈H¹₀(Ω)} F(v)

    ⟹ F(u) = inf_{v∈H¹₀(Ω)} F(v).
4) Crucial was

    ½|p|² ≥ ½|q|² + (q, p − q).

Define F(p) = ½|p|², so F′(p) = p. We need F(p) ≥ F(q) + (F′(q), p − q) with q fixed, p ∈ Rⁿ, i.e., F is convex.

Proposition 12.1.2 ():
i.) Let f ∈ C¹(Rⁿ). Then f is convex, i.e., f(λp + (1 − λ)q) ≤ λf(p) + (1 − λ)f(q) for all p, q ∈ Rⁿ and λ ∈ [0, 1], if and only if

    f(p) ≥ f(q) + (f′(q), p − q) ∀p, q ∈ Rⁿ.

ii.) If f ∈ C²(Rⁿ), then f is convex ⟺ D²f is positive semidefinite, i.e.,

    Σ_{i,j=1}^n Dᵢⱼf(p)ξᵢξⱼ ≥ 0 ∀p ∈ Rⁿ, ∀ξ ∈ Rⁿ.
We needed for uniqueness

    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|².

The argument would work if

    ½|p|² ≥ ½|q|² + (q, p − q) + λ|p − q|², λ > 0.

Now take F(p) = ½|p|². Then

    F(p) = F(q) + (F′(q), p − q) + ½(F″(q)(p − q), p − q),

and since F′(p) = p and F″(p) = I,

    (F″(q)(p − q), p − q) = |p − q|².

For general F, this argument works if D²F(q) is uniformly positive definite, i.e., ∃λ > 0 such that

    (D²F(q)ξ, ξ) ≥ λ|ξ|² ∀q ∈ Rⁿ, ∀ξ ∈ Rⁿ.    (12.2)
Definition 12.1.3: f ∈ C²(Rⁿ) is uniformly convex if (12.2) holds.

Theorem 12.1.4 (): Suppose that f ∈ C²(Rⁿ) is uniformly convex and satisfies the growth condition

    f(p) ≥ α|p|² − β

for constants α > 0, β ≥ 0. Then the variational problem: minimize F(u) in H¹₀(Ω) with

    F(u) = ∫_Ω ( f(Du) − gu ) dx

has a unique minimizer, which solves the Euler–Lagrange equation −div(f′(Du)) = g in Ω and u = 0 on ∂Ω.
Proof:
• F(u) is bounded from below, since

    F(u) = ∫_Ω ( f(Du) − gu ) dx ≥ ∫_Ω ( α|Du|² − β − gu ) dx ≥ (α/2) ∫_Ω |Du|² dx − C > −∞

by the Hölder, Young and Poincaré inequalities.
• Weak lower semicontinuity, since f is convex.
• Uniqueness, since f is uniformly convex.
• Euler–Lagrange equation:

    d/dε|_{ε=0} ∫_Ω [ f(Du + ε·Dv) − g(u + εv) ] dx = ∫_Ω ( ∂f/∂pᵢ(Du)·∂v/∂xᵢ − gv ) dx
        = ∫_Ω ( −div f′(Du) − g ) v dx = 0 ∀v ∈ H¹₀(Ω)
Recall: Given a functional

    F(u) = ∫_Ω ( ½|Du|² − fu ) dx,

we have shown the following:
• If we have weak lower semicontinuity AND convexity, we have existence of a minimizer.
• If we have uniform convexity, we get uniqueness of the minimizer.
The same scheme applies to general functionals

    F(u) = ∫_Ω F(Du, u, x) dx,

where p ↦ F(p, z, x) is convex.

Euler–Lagrange equation:

    d/dε|_{ε=0} F(u + εv) = 0 ∀v ∈ H¹₀(Ω)
We calculate

    d/dε|_{ε=0} F(u + εv) = d/dε|_{ε=0} ∫_Ω F(Du + ε·Dv, u + εv, x) dx
        = ∫_Ω [ Σ_{j=1}^n ∂F/∂pⱼ(Du, u, x) ∂v/∂xⱼ + ∂F/∂z(Du, u, x)·v ] dx
        = 0 ∀v ∈ H¹₀(Ω)
(This last equality is the weak formulation of our original PDE.)

    ⟺ ∫_Ω [ Σ_{j=1}^n −∂/∂xⱼ( ∂F/∂pⱼ(Du, u, x) )·v + ∂F/∂z(Du, u, x)·v ] dx = 0
    ⟺ ∫_Ω [ −div DpF(Du, u, x) + ∂F/∂z(Du, u, x) ] v dx = 0
    ⟺ −div DpF(Du, u, x) + ∂F/∂z(Du, u, x) = 0

This last line is the strong formulation of our PDE.
If a PDE has variational form, i.e., it is the Euler–Lagrange equation of a variational problem, then we can prove the existence of a weak solution by proving that there exists a minimizer.
12.2 Integral Constraints
In this section we shall expand upon our usage of the calculus of variations to include problems
with integral constraints. Before going into this, a digression will be made to prove the implicit
function theorem, which will subsequently be used to prove the resulting Euler-Lagrange
equation that arises from problems with constraints.
12.2.1 The Implicit Function Theorem
[Bredon] In this subsection we will prove the implicit function theorem with the goal of giving
the reader a better general intuition regarding this theorem and its use. As this section
is self-contained, it may be skipped if the reader feels they are already familiar with this
theorem.
Theorem 12.2.1 (The Mean Value Theorem): Let f ∈ C¹(Rⁿ) and x, x̄ ∈ Rⁿ. Then

    f(x) − f(x̄) = Σ_{i=1}^n ∂f/∂xᵢ(x̃)(xᵢ − x̄ᵢ)

for some x̃ on the line segment between x and x̄.

Proof: Apply the one-dimensional Mean Value Theorem to the function g(t) = f(tx + (1 − t)x̄) and use the Chain Rule:

    dg/dt|_{t=t₀} = Σ_{i=1}^n ∂f/∂xᵢ(x̃) · d(txᵢ + (1 − t)x̄ᵢ)/dt|_{t=t₀} = Σ_{i=1}^n ∂f/∂xᵢ(x̃)(xᵢ − x̄ᵢ),

where x̃ = t₀x + (1 − t₀)x̄.
Corollary 12.2.2 (): Let f ∈ C¹(R^k × R^m) with x ∈ R^k and y, ȳ ∈ R^m. Then

    f(x, y) − f(x, ȳ) = Σ_{i=1}^m ∂f/∂yᵢ(x, ỹ)(yᵢ − ȳᵢ)

for some ỹ on the line segment between y and ȳ.
Lemma 12.2.3 (): Let ξ ∈ Rⁿ and η ∈ R^m be given, and let f ∈ C¹(Rⁿ × R^m; R^m). Assume that f(ξ, η) = η and that

    ∂fᵢ/∂yⱼ |_{x=ξ, y=η} = 0 ∀i, j,

where x and y are the coordinates of Rⁿ and R^m respectively. Then there exist numbers a > 0 and b > 0 and a unique function φ: A → B, where A = {x ∈ Rⁿ : ||x − ξ|| ≤ a} and B = {y ∈ R^m : ||y − η|| ≤ b}, such that φ(ξ) = η and φ(x) = f(x, φ(x)) for all x ∈ A. Moreover, φ is continuous.
Proof: WLOG, assume ξ = 0 and η = 0. Applying Corollary 12.2.2 to each coordinate of f and using the assumption that ∂fᵢ/∂yⱼ = 0 at the origin, and hence small in a neighborhood of 0, we can find a > 0 and b > 0 such that for any given constant 0 < K < 1,

    ||f(x, y) − f(x, ȳ)|| ≤ K||y − ȳ||    (12.3)

for ||x|| ≤ a, ||y|| ≤ b, and ||ȳ|| ≤ b. Moreover, we can take a even smaller so that we also have, for all ||x|| ≤ a,

    Kb + ||f(x, 0)|| ≤ b.    (12.4)

Now consider the set F of all functions φ: A → B with φ(0) = 0, with the norm

    ||φ − ψ||_F = sup{ ||φ(x) − ψ(x)|| : x ∈ A }.

This clearly generates a complete metric space. Now define T: F → F by putting (Tφ)(x) = f(x, φ(x)). To see that T indeed maps F into F, we must check
i) (Tφ)(0) = 0, and
ii) (Tφ)(x) ∈ B for x ∈ A.
For i), we see (Tφ)(0) = f(0, φ(0)) = f(0, 0) = 0. Next, we calculate

    ||(Tφ)(x)|| = ||f(x, φ(x))|| ≤ ||f(x, φ(x)) − f(x, 0)|| + ||f(x, 0)||
                ≤ K·||φ(x)|| + ||f(x, 0)||    by (12.3)
                ≤ Kb + ||f(x, 0)|| ≤ b    by (12.4),

which proves ii). Now, we claim T is a contraction, by computing

    ||Tφ − Tψ||_F = sup_{x∈A} ||f(x, φ(x)) − f(x, ψ(x))|| ≤ sup_{x∈A} K||φ(x) − ψ(x)|| = K·||φ − ψ||_F    by (12.3).

Thus by the Banach fixed point theorem (Theorem 8.3), there is a unique φ: A → B with φ(0) = 0 and Tφ = φ, i.e., f(x, φ(x)) = φ(x) for all x ∈ A. Theorem 8.3 also states that φ = lim_{i→∞} φᵢ, where φ₀ is arbitrary and φᵢ₊₁ = Tφᵢ. Put φ₀(x) ≡ 0. Then each φᵢ is continuous; thus φ is the uniform limit of continuous functions, which implies φ is itself continuous.
Theorem 12.2.4 (The Implicit Function Theorem): Let g ∈ C¹(Rⁿ × R^m) and let ξ ∈ Rⁿ, η ∈ R^m be given with g(ξ, η) = 0. (g only needs to be defined in a neighborhood of (ξ, η).) Assume that the differential of the composition

    R^m → Rⁿ × R^m → R^m,    y ↦ (ξ, y) ↦ g(ξ, y),

is onto at η. (This is a fancy way of saying the Jacobian with respect to the second argument of g, Jy(g), is non-singular, i.e., det[Jy(g)] ≠ 0.) Then there are numbers a > 0 and b > 0 such that there exists a unique function φ: A → B (with A, B as in Lemma 12.2.3), with φ(ξ) = η, such that

    g(x, φ(x)) = 0 ∀x ∈ A

(i.e., φ “solves” the implicit relation g(x, y) = 0). Moreover, if g is Cᵖ, then so is φ (including the case p = ∞).
Proof: The differential referred to is the linear map L: R^m → R^m given by

    Lᵢ(y) = Σ_{j=1}^m ∂gᵢ/∂yⱼ(ξ, η) yⱼ,

that is, the linear map represented by the Jacobian matrix (∂gᵢ/∂yⱼ) at (ξ, η). The hypothesis says that L is non-singular, and hence has a linear inverse L⁻¹: R^m → R^m. Let f: Rⁿ × R^m → R^m be defined by

    f(x, y) = y − L⁻¹(g(x, y)).

Then f(ξ, η) = η − L⁻¹(0) = η. Also, computing the differential at η of y ↦ f(ξ, y) gives

    I − L⁻¹L = I − I = 0.

Explicitly, this computation is as follows: let L = (aᵢ,ⱼ), so that aᵢ,ⱼ = (∂gᵢ/∂yⱼ)(ξ, η), and let L⁻¹ = (bᵢ,ⱼ), so that Σ_k bᵢ,ₖ aₖ,ⱼ = δᵢ,ⱼ. Then

    ∂fᵢ/∂yⱼ(ξ, η) = δᵢ,ⱼ − ∂/∂yⱼ [ Σ_{k=1}^m bᵢ,ₖ gₖ(x, y) ]|_{(ξ,η)}
                  = δᵢ,ⱼ − Σ_{k=1}^m bᵢ,ₖ ∂gₖ/∂yⱼ(ξ, η)
                  = δᵢ,ⱼ − Σ_{k=1}^m bᵢ,ₖ aₖ,ⱼ = 0.

Applying Lemma 12.2.3 to get a, b, and φ with φ(ξ) = η and f(x, φ(x)) = φ(x), we see that

    φ(x) = f(x, φ(x)) = φ(x) − L⁻¹(g(x, φ(x))),

which is equivalent to g(x, φ(x)) = 0.
Now we must show that φ is differentiable. Since det Jy(g) ≠ 0 at (ξ, η), it is nonzero in a neighborhood, say A × B. To show that φ is differentiable at a point x ∈ A, we can assume ξ = 0, η = 0 and consider x = 0 WLOG. With this assumption, we apply the Mean Value Theorem (Theorem 12.2.1) to gᵢ(x, φ(x)), using g(0, 0) = 0:

    0 = gᵢ(x, φ(x)) = gᵢ(x, φ(x)) − gᵢ(0, 0)
      = Σ_{j=1}^n ∂gᵢ/∂xⱼ(pᵢ, qᵢ)·xⱼ + Σ_{k=1}^m ∂gᵢ/∂yₖ(pᵢ, qᵢ)·φₖ(x),
where (p_i, q_i) is some point on the line segment between (0, 0) and (x, φ(x)). Let h^{(j)} denote
the point (0, . . . , 0, h, 0, . . . , 0) in R^n, with the h in the jth place. Then, putting h^{(j)} in place
of x in the above equation and dividing by h, we get
    0 = (∂g_i/∂x_j)(p_i, q_i) + Σ_{k=1}^m (∂g_i/∂y_k)(p_i, q_i) · φ_k(h^{(j)})/h.
For j fixed, i = 1, . . . , m and k = 1, . . . , m, these are m linear equations for the m unknowns

    φ_k(h^{(j)})/h = (φ_k(h^{(j)}) − φ_k(0))/h,
and they can be solved since the determinant of the Jacobian matrix is non-zero in A × B.
The solution (Cramer’s Rule) has a limit as h → 0. Thus, (∂φk/∂xj)(0) exists and equals
this limit.
We now know that φ is differentiable (once) in a neighborhood A × B of (ξ, η), and thus we
can apply standard calculus to differentiate the equation g(x, φ(x)) = 0. The
Chain Rule gives
    0 = (∂g_i/∂x_j)(x, φ(x)) + Σ_{k=1}^m (∂g_i/∂y_k)(x, φ(x)) · (∂φ_k/∂x_j)(x).
Again, these are linear equations with non-zero determinant near (ξ, η) and hence have, by
Cramer's Rule, a solution of the form

    (∂φ_k/∂x_j)(x) = F_{k,j}(x, φ(x)),
where Fk,j is Cp−1 when g is Cp. If φ is Cr for r < p, then the RHS of this equation is also
Cr . Thus, the LHS ∂φk/∂xj is Cr and hence the φk are Cr+1. By induction, the φk are Cp.
Consequently, φ is Cp when g is Cp, as claimed.
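The formula produced by the proof, φ′(x) = −g_y⁻¹ g_x, is easy to check numerically. The following is an illustrative sketch of my own (the function g and the point (ξ, η) = (0, 1) are hypothetical examples, not from the text): φ is computed by Newton's method in the y-variable, and its finite-difference derivative is compared with the implicit-differentiation formula.

```python
import math

# Example: g(x, y) = x^2 + y^2 - 1 = 0 near (xi, eta) = (0, 1), where g_y != 0.
# The implicit function is the upper unit-circle branch phi(x) = sqrt(1 - x^2).

def g(x, y):
    return x**2 + y**2 - 1.0

def g_x(x, y):
    return 2.0 * x

def g_y(x, y):
    return 2.0 * y

def phi(x, y0=1.0, tol=1e-14):
    # Newton iteration in y; converges since g_y is nonzero near eta = 1.
    y = y0
    for _ in range(50):
        step = g(x, y) / g_y(x, y)
        y -= step
        if abs(step) < tol:
            break
    return y

x = 0.3
h = 1e-6
numeric = (phi(x + h) - phi(x - h)) / (2 * h)     # central finite difference
formula = -g_x(x, phi(x)) / g_y(x, phi(x))        # implicit differentiation
assert abs(phi(x) - math.sqrt(1 - x**2)) < 1e-12  # phi matches the explicit branch
assert abs(numeric - formula) < 1e-8
```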
12.2.2 Nonlinear Eigenvalue Problems
[Evans] Now we are ready to investigate problems with integral constraints. As in the previous
section, we will first consider a concrete example. So, let us look at the problem of minimizing
the energy functional

    I(w) = (1/2) ∫_Ω |Dw|² dx

over all functions w with, say, w = 0 on ∂Ω, but subject now also to the side condition that

    J(w) = ∫_Ω G(w) dx = 0,
where G : R→ R is a given, smooth function.
We will henceforth write g = G′. Assume now

    |g(z)| ≤ C(|z| + 1)

and so

    |G(z)| ≤ C(|z|² + 1)   (∀z ∈ R)   (12.5)

for some constant C.
Now let us introduce an appropriate admissible space

    A = { w ∈ H¹₀(Ω) | J(w) = 0 }.
We also make the standard assumption that the open set Ω is bounded, connected and has
a smooth boundary.
Theorem 12.2.5 (Existence of constrained minimizer): Assume the admissible space A
is nonempty. Then ∃u ∈ A satisfying

    I(u) = min_{w∈A} I(w).
Proof: As usual, choose a minimizing sequence {u_k}_{k=1}^∞ ⊂ A with

    I(u_k) → m = inf_{w∈A} I(w).

Then as in the previous section, we can extract a weakly converging subsequence

    u_{k_j} ⇀ u in H¹₀(Ω),
with I(u) ≤ m. All that we need to show now is
J(u) = 0,
which shows that u ∈ A. Utilizing the fact that H¹₀(Ω) embeds compactly into L²(Ω), we
extract another subsequence of u_{k_j} such that

    u_{k_j} → u in L²(Ω),
where we have relabeled ukj to be this new subsequence. Consequently,
    |J(u)| = |J(u) − J(u_k)| ≤ ∫_Ω |G(u) − G(u_k)| dx
           ≤ C ∫_Ω |u − u_k| (1 + |u| + |u_k|) dx   by (12.5)
           → 0 as k → ∞.
A more interesting topic than the existence of the constrained minimizers is an examination
of the corresponding Euler-Lagrange equation.
Theorem 12.2.6 (Lagrange multiplier): Let u ∈ A satisfy

    I(u) = min_{w∈A} I(w).

Then ∃λ ∈ R such that

    ∫_Ω Du · Dv dx = λ ∫_Ω g(u) v dx   (12.6)

∀v ∈ H¹₀(Ω).
Remark: Thus u is a weak solution of the nonlinear BVP

    −∆u = λ g(u) in Ω,
    u = 0 on ∂Ω,   (12.7)

where λ is the Lagrange multiplier corresponding to the integral constraint J(u) = 0.
A problem of the form (12.7) for the unknowns (u, λ), with u ≢ 0, is a nonlinear eigenvalue
problem.
Proof of Theorem 12.2.6:
Step 1) Fix any function v ∈ H¹₀(Ω). Assume first

    g(u) ≠ 0 a.e. in Ω.   (12.8)

Choose then any function w ∈ H¹₀(Ω) with

    ∫_Ω g(u) w dx ≠ 0;   (12.9)

this is possible because of (12.8). Now write

    j(τ, σ) = J(u + τv + σw) = ∫_Ω G(u + τv + σw) dx   (τ, σ ∈ R).

Clearly

    j(0, 0) = ∫_Ω G(u) dx = 0.
In addition, j is C¹ and

    ∂j/∂τ (τ, σ) = ∫_Ω g(u + τv + σw) v dx,   (12.10)
    ∂j/∂σ (τ, σ) = ∫_Ω g(u + τv + σw) w dx.   (12.11)

Consequently, (12.9) implies

    ∂j/∂σ (0, 0) ≠ 0.
According to the Implicit Function Theorem, ∃φ ∈ C1(R) such that
φ(0) = 0
and
j(τ, φ(τ)) = 0 (12.12)
for all sufficiently small τ , say |τ | ≤ τ0. Differentiating, we discover
    ∂j/∂τ (τ, φ(τ)) + ∂j/∂σ (τ, φ(τ)) φ′(τ) = 0;
whence (12.10) and (12.11) yield
    φ′(0) = − ( ∫_Ω g(u) v dx ) / ( ∫_Ω g(u) w dx ).   (12.13)
Step 2) Now set
w(τ) = τv + φ(τ)w (|τ | ≤ τ0)
and write
i(τ) = I(u + w(τ)).
Since (12.12) implies J(u+w(τ)) = 0, we see that u+w(τ) ∈ A. So the C1 function
i( · ) has a minimum at 0. Thus
    0 = i′(0) = ∫_Ω (Du + τDv + φ(τ)Dw) · (Dv + φ′(τ)Dw) dx |_{τ=0}
              = ∫_Ω Du · (Dv + φ′(0)Dw) dx.   (12.14)
Recall now (12.13) and define

    λ = ( ∫_Ω Du · Dw dx ) / ( ∫_Ω g(u) w dx ),

to deduce from (12.14) the desired equality

    ∫_Ω Du · Dv dx = λ ∫_Ω g(u) v dx   ∀v ∈ H¹₀(Ω).
Step 3) Suppose now, instead of (12.8), that

    g(u) = 0 a.e. in Ω.

Approximating g by bounded functions, we deduce DG(u) = g(u) · Du = 0 a.e. Hence,
since Ω is connected, G(u) is constant a.e. It follows that G(u) = 0 a.e., because

    J(u) = ∫_Ω G(u) dx = 0.
As u = 0 on ∂Ω in the trace sense, it follows that G(0) = 0.
But then u = 0 a.e. as otherwise I(u) > I(0) = 0. Since g(u) = 0 a.e., the identity
(12.6) is trivially valid in this case, for any λ.
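The weak identity (12.6) can be observed concretely. The following is a hypothetical numerical illustration of my own (not part of the text): on Ω = (0, 1), taking g(u) = u reduces (12.7) to the linear eigenvalue problem −u″ = λu with u(0) = u(1) = 0, whose first eigenpair is u(x) = sin(πx), λ = π². The script checks ∫ u′v′ dx = λ ∫ u v dx against a few test functions v.

```python
import math

N = 2000
h = 1.0 / N
xs = [i * h for i in range(N + 1)]

def integrate(f):
    # composite trapezoidal rule on [0, 1]
    return h * (sum(f(x) for x in xs) - 0.5 * (f(0.0) + f(1.0)))

u  = lambda x: math.sin(math.pi * x)          # first eigenfunction
du = lambda x: math.pi * math.cos(math.pi * x)
lam = math.pi ** 2                            # first eigenvalue (the multiplier)

for k in (1, 2, 3):                           # test functions v_k(x) = sin(k*pi*x)
    v  = lambda x, k=k: math.sin(k * math.pi * x)
    dv = lambda x, k=k: k * math.pi * math.cos(k * math.pi * x)
    lhs = integrate(lambda x: du(x) * dv(x))  # int u' v' dx
    rhs = lam * integrate(lambda x: u(x) * v(x))
    assert abs(lhs - rhs) < 1e-6              # identity (12.6) with g(u) = u
```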
12.3 Uniqueness of Solutions
Warning: In general, uniqueness of minimizers does not imply uniqueness of weak solutions
for the Euler-Lagrange equation. However, if the joint mapping (p, z) 7→ F(p, z, x) is convex
for x ∈ Ω, then uniqueness of minimizers implies uniqueness of weak solutions.
Reason: In this case, weak solutions are automatically minimizers. Suppose u is a weak
solution of

    −div(∂F/∂p) + ∂F/∂z = 0 in Ω,   u = g on ∂Ω,

and that (p, z) ↦ F(p, z, x) is convex for all x ∈ Ω, i.e.,

    F(q, y, x) ≥ F(p, z, x) + D_pF(p, z, x) · (q − p) + D_zF(p, z, x) · (y − z)

(the graph of F lies above the tangent plane).
Let w satisfy w = g on ∂Ω, and choose p = Du, z = u, q = Dw and y = w:

    F(Dw, w, x) ≥ F(Du, u, x) + D_pF(Du, u, x) · (Dw − Du) + D_zF(Du, u, x) · (w − u).

Integrate in Ω:

    ∫_Ω F(Dw, w, x) dx ≥ ∫_Ω F(Du, u, x) dx
                         + ∫_Ω [D_pF(Du, u, x) · (Dw − Du) + D_zF(Du, u, x) · (w − u)] dx,

where the last integral vanishes since w − u = 0 on ∂Ω and u is a weak solution of the
Euler-Lagrange equation. Hence

    F(w) = ∫_Ω F(Dw, w, x) dx ≥ ∫_Ω F(Du, u, x) dx = F(u).
12.4 Energies

Ideas: Multiply by u or u_t, integrate in space/space-time, and try to find non-negative
quantities. Typical terms in energies:

    ∫_Ω |Du|² dx,   ∫_Ω |u|² dx,   ∫_Ω |u_t|² dx.

If you take dE/dt, you should find a term that allows you to use the original PDE:

    d/dt (1/2) ∫_Ω |u_t(x, t)|² dx = ∫_Ω u_t(x, t) · u_tt(x, t) dx

is good for hyperbolic equations, and

    d/dt (1/2) ∫_Ω |u(x, t)|² dx = ∫_Ω u(x, t) · u_t(x, t) dx

is good for parabolic equations.
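The parabolic energy identity can be seen at the discrete level. This is a minimal sketch of my own (assumptions: the 1-D heat equation u_t = u_xx on (0, 1) with homogeneous Dirichlet data, explicit finite differences; none of this is from the text). It checks that E(t) = (1/2) ∫ u² dx decays and that dE/dt ≈ −∫ |u_x|² dx, which is what multiplying the PDE by u and integrating by parts predicts.

```python
import math

N  = 100
h  = 1.0 / N
dt = 0.25 * h * h            # stable explicit time step (dt <= h^2 / 2)
u  = [math.sin(math.pi * i * h) for i in range(N + 1)]

def energy(u):
    # E = (1/2) int u^2 dx (simple Riemann sum)
    return 0.5 * h * sum(ui * ui for ui in u)

def dirichlet_integral(u):
    # int |u_x|^2 dx via forward differences
    return h * sum(((u[i + 1] - u[i]) / h) ** 2 for i in range(N))

E_prev = energy(u)
for _ in range(200):
    # one explicit Euler step of u_t = u_xx, keeping u = 0 at the boundary
    u = [0.0] + [u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2
                 for i in range(1, N)] + [0.0]
    E = energy(u)
    assert E < E_prev                                          # energy decays
    assert abs((E - E_prev) / dt + dirichlet_integral(u)) < 1e-2  # dE/dt ~ -int |u_x|^2
    E_prev = E
```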
12.5 Weak Formulations
Typically, weak formulations are integral identities that:

• involve test functions,
• contain fewer derivatives than the strong form of the PDE,
• arise by multiplying the strong form by a test function and integrating by parts.
Examples:
1) −∆u = f in Ω, u = 0 on ∂Ω:

    ∫_Ω −(∆u) v dx = ∫_Ω f v dx ⟺ ∫_Ω Du · Dv dx = ∫_Ω f v dx   ∀v ∈ H¹₀(Ω).
So, we should have: Weak formulation + Regularity ⟹ Strong formulation, e.g.,

    u ∈ H¹₀(Ω) ∩ H²(Ω) + weak solution ⟹ −∆u = f in Ω.
– Dirichlet conditions: Enforced in the sense that we require u to have correct
boundary data, e.g., for

    −∆u = f in Ω,   u = g on ∂Ω,

seek u ∈ H¹(Ω) with u = g on ∂Ω.
– Neumann conditions: natural boundary conditions; they should arise as a conse-
quence of the weak formulation.

Weak formulation:

    ∫_Ω Du · Dv dx = ∫_Ω f v dx   ∀v ∈ H¹(Ω).

Taking first test functions v ∈ H¹₀(Ω) ⊂ H¹(Ω) gives −∆u = f in Ω.
Now take general v ∈ H¹(Ω) in

    ∫_Ω Du · Dv dx = ∫_Ω f v dx

and integrate by parts:

    −∫_Ω ∆u · v dx + ∫_∂Ω (Du · ν) v dS = ∫_Ω f v dx
    ⟹ ∫_∂Ω (Du · ν) v dS = 0   ∀v ∈ H¹(Ω)
    ⟹ Du · ν = 0 on ∂Ω.
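The boundary term ∫_∂Ω (Du · ν) v dS that produces the natural condition is easy to exhibit numerically. A small illustrative check of my own (the functions u and v are hypothetical examples, not from the text): in 1-D on (0, 1) the boundary term is [u′v]₀¹, and integration by parts reads ∫ u′v′ dx = −∫ u″v dx + u′(1)v(1) − u′(0)v(0).

```python
import math

u   = lambda x: math.cos(math.pi * x)           # satisfies u'(0) = u'(1) = 0
du  = lambda x: -math.pi * math.sin(math.pi * x)
d2u = lambda x: -math.pi**2 * math.cos(math.pi * x)
v   = lambda x: math.exp(x)                     # test function NOT zero on the boundary

N = 4000
h = 1.0 / N

def integrate(f):
    # composite trapezoidal rule on [0, 1]
    return h * (sum(f(i * h) for i in range(N + 1)) - 0.5 * (f(0.0) + f(1.0)))

lhs = integrate(lambda x: du(x) * v(x))         # note v'(x) = e^x = v(x)
rhs = -integrate(lambda x: d2u(x) * v(x)) + du(1.0) * v(1.0) - du(0.0) * v(0.0)
assert abs(lhs - rhs) < 1e-5                    # integration by parts holds

# since u'(0) = u'(1) = 0, the boundary term vanishes for EVERY v in H^1,
# which is exactly how the homogeneous Neumann condition is encoded weakly:
assert abs(du(1.0) * v(1.0) - du(0.0) * v(0.0)) < 1e-12
```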
[Figure: domain Ω with boundary portions Γ₀ and Γ₁, and the BVP
−∆u = f in Ω, u = 0 on Γ₀, Du · ν = g on Γ₁.]

2) Now we consider a problem with split BCs:

    −∆u = f in Ω,   u = 0 on Γ₀,   Du · ν = g on Γ₁,

with Ω smooth, bounded, connected, Γ₀ and Γ₁ of positive surface measure, and ∂Ω = Γ₀ ∪ Γ₁.
Define H(Ω) = {φ ∈ H¹(Ω) : φ = 0 on Γ₀}. Then

    ∫_Ω −∆u · φ dx = ∫_Ω f φ dx   ∀φ ∈ H(Ω),

and the left-hand side equals

    ∫_Ω Du · Dφ dx − ∫_{∂Ω=Γ₀∪Γ₁} (Du · ν) φ dS = ∫_Ω Du · Dφ dx − ∫_{Γ₁} g φ dS,

since φ = 0 on Γ₀.
So for the weak formulation, we seek u ∈ H(Ω) such that

    ∫_Ω Du · Dφ dx = ∫_{Γ₁} g φ dS + ∫_Ω f φ dx   ∀φ ∈ H(Ω).
Check: Weak solution + Regularity ⟹ Strong solution.
The Dirichlet data is okay: u ∈ H(Ω) ⟹ u = 0 on Γ₀. Integrating by parts in the weak
formulation,

    ∫_Ω Du · Dφ dx = −∫_Ω ∆u · φ dx + ∫_∂Ω (Du · ν) φ dS = ∫_{Γ₁} g φ dS + ∫_Ω f φ dx   ∀φ ∈ H(Ω).

Now, taking φ ∈ H¹₀(Ω) ⊂ H(Ω), we see

    ∫_Ω (−∆u − f) φ dx = 0   ∀φ ∈ H¹₀(Ω)
    ⟹ −∆u − f = 0 in Ω ⟺ −∆u = f ⟹ the PDE holds.
The volume terms then cancel, and we get

    ∫_∂Ω (Du · ν) φ dS = ∫_{Γ₁} g φ dS   ∀φ ∈ H(Ω),

i.e., since φ = 0 on Γ₀,

    ∫_{Γ₁} (Du · ν) φ dS = ∫_{Γ₁} g φ dS ⟹ Du · ν = g on Γ₁.

We have thus recovered the Neumann BC.
12.6 A General Existence Theorem
Goal: Existence of a minimizer for

    F(u) = ∫_Ω L(Du, u, x) dx

with L : R^n × R × Ω → R smooth and bounded from below.
Note:

    L(p, z, x) = (1/2)|p|² − f(x) · z,   f ∈ L²(Ω),

doesn't fit into this framework, as it is not bounded from below in z.
Theorem 12.6.1 (Weak lower semicontinuity): Suppose that p ↦ L(p, z, x) is convex. Then
F(·) is sequentially weakly lower semicontinuous on W^{1,p}(Ω), 1 < p < ∞.
Idea of Proof: We have to show: if u_k ⇀ u in W^{1,p}(Ω), then lim inf_{k→∞} F(u_k) ≥ F(u).
This is trivially true if lim inf_{k→∞} F(u_k) = ∞.
Suppose M = lim inf_{k→∞} F(u_k) < ∞, and that lim inf = lim (subsequence, not relabeled).
u_k ⇀ u in W^{1,p}(Ω) ⟹ ||u_k||_{W^{1,p}(Ω)} bounded.
Compact Sobolev embedding: without loss of generality uk → u in Lp(Ω) =⇒ convergence
in measure =⇒ further subsequence: uk → u pointwise a.e.
Egoroff: exists Eε such that uk → u uniformly on Eε, |Ω \ Eε| < ε.
Define Fε by

    Fε = { x ∈ Ω : |u(x)| + |Du(x)| ≤ 1/ε }.
Claim: |Ω \ Fε| → 0. Fix K (= 1/ε). Then

    |{|u(x)| > K}| ≤ (1/K) ∫_Ω |u(x)| dx
                  ≤ (1/K) ( ∫_Ω 1 dx )^{1/p′} ( ∫_Ω |u(x)|^p dx )^{1/p}   (Hölder)
                  ≤ C/K.
Define good points Gε = Fε ∩ Eε, then |Ω \ Gε| → 0 as ε→ 0.
By assumption, L bounded from below =⇒ may assume that L ≥ 0.
    F(u_k) = ∫_Ω L(Du_k, u_k, x) dx
           ≥ ∫_{Gε} L(Du_k, u_k, x) dx
           ≥ ∫_{Gε} L(Du, u_k, x) dx + ∫_{Gε} D_pL(Du, u_k, x) · (Du_k − Du) dx   (by convexity of L)
           =: I_k + J_k,
since L(q, z, x) ≥ L(p, z, x) + L_p(p, z, x) · (q − p). Now
    I_k → ∫_{Gε} L(Du, u, x) dx   (uniformly)

and

    J_k = ∫_{Gε} (D_pL(Du, u_k, x) − D_pL(Du, u, x)) · (Du_k − Du) dx
          + ∫_{Gε} D_pL(Du, u, x) · (Du_k − Du) dx → 0,

the first integral since D_pL(Du, u_k, x) − D_pL(Du, u, x) → 0 uniformly on Gε, and the
second since Du_k ⇀ Du.
So, we have

    lim inf_{k→∞} F(u_k) ≥ ∫_{Gε} L(Du, u, x) dx.

Now, we use monotone convergence for ε ↓ 0 to get

    lim inf_{k→∞} F(u_k) ≥ ∫_Ω L(Du, u, x) dx = F(u).

Recall the assumption: F(u) = ∫_Ω L(Du, u, x) dx, with p ↦ L(p, z, x) convex.
Theorem 12.6.2 (Existence of minimizers): Suppose that L satisfies the coercivity assumption
L(p, z, x) ≥ α|p|^q − β with α > 0, β ≥ 0 and 1 < q < ∞. Suppose in addition that L is
convex in p for all z, x. Also, suppose that the class of admissible functions

    A = { w ∈ W^{1,q}(Ω) : w = g on ∂Ω }

is not empty. Lastly, assume that g ∈ W^{1,q}(Ω). If these conditions are met, then there
exists a minimizer of the variational problem:

    Minimize F(u) = ∫_Ω L(Du, u, x) dx over A.
Remark: It's reasonable to assume that there exists a w ∈ A with F(w) < ∞. Typically
one assumes in addition that L satisfies a growth condition of the form

    L(p, z, x) ≤ C(|p|^q + 1).

This ensures that F is finite on A.
Idea of Proof: If F(w) = +∞ ∀w ∈ A, then there is nothing to show. Thus, suppose
F(w) < ∞ for at least one w ∈ A.
Given L(p, z, x) ≥ α|p|^q − β, we see that for any w ∈ A,

    F(w) = ∫_Ω L(Dw, w, x) dx ≥ ∫_Ω (α|Dw|^q − β) dx
         ≥ α ∫_Ω |Dw|^q dx − β · |Ω|   (12.15)
⟹ F(w) ≥ −β · |Ω| for w ∈ A ⟹ F is bounded from below:

    −∞ < M = inf_{w∈A} F(w) < ∞.

Choose a minimizing sequence w_k such that F(w_k) → M as k → ∞. Then by (12.15):

    α ∫_Ω |Dw_k|^q dx ≤ F(w_k) + β · |Ω| ≤ M + 1 + β · |Ω|

if k is large enough.
Now since w_k ∈ A, we know w_k = g on ∂Ω. Thus,

    ||w_k||_{L^q(Ω)} = ||w_k − g + g||_{L^q(Ω)}
                     ≤ ||w_k − g||_{L^q(Ω)} + ||g||_{L^q(Ω)}
                     ≤ C_p ||Dw_k − Dg||_{L^q(Ω)} + ||g||_{L^q(Ω)}   (Poincaré)
                     ≤ C_p ||Dw_k||_{L^q(Ω)} + C_p ||Dg||_{L^q(Ω)} + ||g||_{L^q(Ω)}
                     ≤ C < ∞   ∀k

⟹ w_k is a bounded sequence in W^{1,q}(Ω)
⟹ w_k contains a weakly converging subsequence (not relabeled), w_k ⇀ w in W^{1,q}(Ω).
The weak lower semicontinuity result, Theorem 12.6.1, applies, so

    F(w) ≤ lim inf_{k→∞} F(w_k) = M = inf_{v∈A} F(v).

If w ∈ A, then

    F(w) = M = inf_{v∈A} F(v)

⟹ w is a minimizer.
To show this, we apply the following theorem:
Theorem 12.6.3 (Mazur's Theorem): Let X be a Banach space, and suppose that x_n ⇀ x
in X. Then for given ε > 0 there exists a convex combination Σ_{j=1}^k λ_j x_{n_j} of elements
x_{n_j} ∈ {x_n}_{n∈N}, with λ_j ≥ 0 and Σ_{j=1}^k λ_j = 1, such that

    || x − Σ_{j=1}^k λ_j x_{n_j} || < ε.

In particular, there exists a sequence of convex combinations that converges strongly to x.
Proof: [Yosida] Consider the totality M_T of all convex combinations of the x_n. We can
assume 0 ∈ M_T by replacing x with x − x₁ and x_j with x_j − x₁.
We go for a proof by contradiction. Suppose the contrary of the theorem, i.e., ||x − u||_X > ε
for all u ∈ M_T. Now consider the set

    M = { v ∈ X : ||v − u||_X ≤ ε/2 for some u ∈ M_T },
which is a convex (balanced and also absorbing) neighborhood of 0 in X, with ||x − v||_X > ε/2
for all v ∈ M.
Recall the Minkowski functional p_A for a given set A:

    p_A(x) = inf { α > 0 : α⁻¹x ∈ A }.

Now, since x ∉ M, we may write x = β⁻¹u₀ for some u₀ ∈ M and 0 < β < 1; thus
p_M(x) ≥ β⁻¹ > 1.
Consider now the real linear subspace of X

    X₁ = { x′ ∈ X : x′ = γ · u₀, −∞ < γ < ∞ }

and define f₁(x′) = γ for x′ = γ · u₀ ∈ X₁. This real linear functional f₁ on X₁ satisfies
f₁(x′) ≤ p_M(x′). Thus, by the Hahn-Banach extension theorem, there exists a real linear
extension f of f₁ defined on the real linear space X such that f(x′) ≤ p_M(x′) on X. Since
M is a neighborhood of 0, the Minkowski functional p_M is continuous. From this we see
that f must also be a continuous real linear functional on X. Thus,

    sup_{x′∈M_T} f(x′) ≤ sup_{x′∈M} f(x′) ≤ sup_{x′∈M} p_M(x′) = 1 < β⁻¹ = f(β⁻¹u₀) = f(x).

Hence x cannot be a weak accumulation point of M_T; this is a contradiction, since we
assumed that x_n ⇀ x.
Corollary 12.6.4 (): If Y is a strongly closed, convex set in a Banach space X, then Y is
weakly closed.
Proof: If x_n ⇀ x with x_n ∈ Y, Mazur's theorem gives a sequence of convex combinations
y_k ∈ Y (by convexity of Y) such that y_k → x. Since Y is closed, x ∈ Y.
Application to the proof of Theorem 12.6.2: A is a convex set. We see that A is closed by
the following argument: if w_k → w in W^{1,q}(Ω), then by the Trace theorem we have

    ||w_k − w||_{L^q(∂Ω)} ≤ C_{Tr} ||w_k − w||_{W^{1,q}(Ω)} → 0,

and since w_k = g on ∂Ω, also w = g on ∂Ω.
So, by Corollary 12.6.4, A is also weakly closed, i.e., if w_k ⇀ w in W^{1,q}(Ω), then w ∈ A.
This establishes existence by the arguments stated earlier in the proof.
Uniqueness of this minimizer is true if L is uniformly convex (see Evans for proof).
12.7 A Few Applications to Semilinear Equations
Here we want to show how the technique used in the linear case extends to some nonlinear
cases. First we show that "lower-order terms" are irrelevant for the regularity. For example,
consider

    ∫_Ω Du · Dφ dx = ∫_Ω a(u) · Dφ dx   ∀φ ∈ H¹₀(Ω).   (12.16)
If ∫_{B_R(x₀)} Dv · Dφ dx = 0 for all φ ∈ H¹₀(B_R) and v = u on ∂B_R, then

    ∫_{B_ρ(x₀)} |Du|² dx ≤ C (ρ/R)^n ∫_{B_R(x₀)} |Du|² dx + C ∫_{B_R(x₀)} |D(u − v)|² dx
                         ≤ C (ρ/R)^n ∫_{B_R(x₀)} |Du|² dx + C ∫_{B_R(x₀)} |a(u)|² dx.
Now if |a(u)| ≤ |u|^α, then the last integral is estimated by

    ∫_{B_R(x₀)} |a(u)|² dx ≤ ( ∫_{B_R(x₀)} |u|^{2n/(n−2)} dx )^{α(1−2/n)} R^{n(1−α(1−2/n))},

so that u is Hölder continuous provided α < 2/(n − 2).
The same result holds if we have, instead of (12.16),

    ∫_Ω Du · Dφ dx = ∫_Ω a(u) · Dφ dx + ∫_Ω a(Du) · φ dx

with |a(Du)| ≤ |Du|^{1−2/n}, or with |a(Du)| ≤ |Du|^{2−ε} and |u| ≤ M = const.
The above idea also works if we study small perturbations of a system with constant
coefficients: for

    ∫_Ω A^{αβ}_{ij}(x) D_α u^i D_β φ^j dx = 0

with

    A^{αβ}_{ij} ∈ L^∞(Ω) and | A^{αβ}_{ij}(x) − δ^{αβ} δ_{ij} | ≤ ε₀,

one finds

    ∫_{B_ρ(x₀)} |Du|² dx ≤ C (ρ/R)^n ∫_{B_R(x₀)} |Du|² dx + C ∫_{B_R(x₀)} |δ^{αβ}δ_{ij} − A^{αβ}_{ij}| · |Du|² dx,

where the last coefficient is ≤ ε₀, because

    ∫_Ω δ^{αβ} δ_{ij} D_α u^i D_β φ^j dx = ∫_Ω (δ^{αβ} δ_{ij} − A^{αβ}_{ij}) D_α u^i D_β φ^j dx.
So we can apply the lemma from the very beginning of 9.8 and conclude that Du is Hölder
continuous. Finally we mention the
Theorem 12.7.1 (): Let u ∈ H^{1,2}(Ω, R^N) be a minimizer of

    ∫_Ω F(Du) dx

and let us assume that

    F(tp)/t² → |p|² as t → ∞;

then u ∈ C^{0,µ}(Ω).
Proof: We consider the problem

    ∫_{B_R(x₀)} |DH|² dx → min,   H = u on ∂B_R

(H stands for harmonic function!). Then

    ∫_{B_ρ(x₀)} |Du|² dx ≤ C (ρ/R)^n ∫_{B_R(x₀)} |Du|² dx + ∫_{B_R(x₀)} |Du − DH|² dx.
Now, using ∫_{B_R} DH · D(u − H) dx = 0 (H is harmonic and u − H ∈ H¹₀(B_R)),

    ∫_{B_R(x₀)} D(u − H) · D(u − H) dx
        = ∫_{B_R(x₀)} Du · D(u − H) dx − ∫_{B_R(x₀)} DH · D(u − H) dx
        = ∫_{B_R(x₀)} |Du|² dx − ∫_{B_R(x₀)} Du · DH dx
        = ∫_{B_R(x₀)} |Du|² dx − ∫_{B_R(x₀)} (D(u − H) + DH) · DH dx
        = ∫_{B_R(x₀)} |Du|² dx − ∫_{B_R(x₀)} |DH|² dx
        = ∫_{B_R(x₀)} |Du|² dx + [ ∫_{B_R(x₀)} F(Du) dx − ∫_{B_R(x₀)} F(DH) dx ]
          − ∫_{B_R(x₀)} F(Du) dx + ∫_{B_R(x₀)} F(DH) dx − ∫_{B_R(x₀)} |DH|² dx,

where the bracketed difference is ≤ 0, since u is a minimizer.
By the growth assumption on F and because

    |p|² − F(p) = |p|² ( 1 − F(p)/|p|² ),

we find

    ∫_{B_R(x₀)} |D(u − H)|² dx ≤ ε ∫_{B_R(x₀)} |Du|² dx + C · R^n.
So we can apply the lemma at the beginning of the section and conclude that Du is Hölder
continuous.
12.8 Regularity
We discuss in this section the smoothness of minimizers to our energy functionals. This is
generally a quite difficult topic, and so we will make a number of strong simplifying assump-
tions, most notably that L depends only on p. Thus we henceforth assume our functions I[·]to have the form
I[w ] :=
∫
Ω
L(Dw)− wf dx, (12.17)
for f ∈ L2(Ω). We will also take q = 2, and suppose as well the growth condition
|DpL(p)| ≤ C(|p|+ 1) (p ∈ Rn). (12.18)Gregory T. von Nessi
Then any minimizer u ∈ A is a weak solution of the Euler-Lagrange PDE

    −Σ_{i=1}^n (L_{p_i}(Du))_{x_i} = f in Ω;   (12.19)

that is,

    ∫_Ω Σ_{i=1}^n L_{p_i}(Du) v_{x_i} dx = ∫_Ω f v dx   (12.20)

for all v ∈ H¹₀(Ω).
12.8.1 DeGiorgi-Nash Theorem
[Moser-Iteration Technique]
Let u be a subsolution for the elliptic operator −D_β(a^{αβ}D_α), i.e.

    ∫_Ω a^{αβ} D_α u D_β φ dx ≤ 0   ∀φ ∈ H^{1,2}₀(Ω), φ ≥ 0,

where a^{αβ} ∈ L^∞(Ω) and satisfy an ellipticity condition. The idea is to use the test function
φ := (u⁺)^p η² with p > 1 and η the cut-off function we defined on page 20. For the sake of
simplicity we assume that u ≥ 0. Then

    p ∫_{B_R} a^{αβ} D_α u D_β u u^{p−1} η² dx + 2 ∫_{B_R} a^{αβ} D_α u D_β η u^p η dx ≤ 0,

so by ellipticity

    ∫_{B_R} |Du|² u^{p−1} η² dx ≤ (c/p) ∫_{B_R} |Du| u^{(p−1)/2} u^{(p+1)/2} η |Dη| dx,

and hence, by the Cauchy-Schwarz inequality,

    ∫_{B_R} |Du|² u^{p−1} η² dx ≤ (c/p²) ∫_{B_R} u^{p+1} |Dη|² dx.
As

    | D u^{(p+1)/2} |² = ((p+1)/2)² u^{p−1} |Du|²,

we get

    ∫_{B_R} | D u^{(p+1)/2} |² η² dx ≤ c ((p+1)/p)² ∫_{B_R} u^{p+1} |Dη|² dx,

and together with

    | D(η u^{(p+1)/2}) |² ≤ c ( | D u^{(p+1)/2} |² η² + u^{p+1} |Dη|² )

follows

    ∫_{B_R} | D(η u^{(p+1)/2}) |² dx ≤ c ( 1 + ((p+1)/2)² ) ∫_{B_R} u^{p+1} |Dη|² dx   (p > 1)
                                     ≤ c (p+1)² ( 1 + 1/p² ) ∫_{B_R} u^{p+1} |Dη|² dx.
By Sobolev's inequality and the properties of η we finally have:

    ( ∫_{B_R} (u^{(p+1)/2} η)^{2*} dx )^{2/2*} ≤ c (p+1)² ( 1 + 1/p² ) (1/(R−ρ)²) ∫_{B_R} u^{p+1} dx.

For p = 1, this is just Caccioppoli's inequality. Set λ := 2*/2 = n/(n−2) and q := p + 1 > 2;
then

    ( ∫_{B_ρ} u^{λq} dx )^{1/λ} ≤ c ((1+q)²/(R−ρ)²) ∫_{B_R} u^q dx.
Now we choose

    q_i := 2λ^i = λ q_{i−1}   (q₀ = 2),
    R_i := R/2 + R/2^{i+1}   (R₀ = R),

and we find that

    ( ∫_{B_{R_{i+1}}} u^{q_{i+1}} dx )^{1/λ^{i+1}}
        = ( ( ∫_{B_{R_{i+1}}} u^{λq_i} dx )^{1/λ} )^{1/λ^i}
        ≤ [ c (1+q_i)² / (2^{−2(i+1)} R²) ]^{1/λ^i} ( ∫_{B_{R_i}} u^{q_i} dx )^{1/λ^i}
        ≤ Π_{k=0}^i [ c (1+q_k)² / (R² 4^{−k−1}) ]^{1/λ^k} ( ∫_{B_R} u² dx )^{1/2}.
We have to estimate the above product:

    Π_{k=0}^∞ [ c (1+q_k)² / (R² 4^{−k−1}) ]^{1/λ^k}
        = exp ( Σ_{k=0}^∞ (1/λ^k) ln [ c (1+q_k)² 4^{k+1} / R² ] )
        = exp ( c − n ln R ),

using q_k = 2λ^k and Σ_{k=0}^∞ λ^{−k} = n/2. Thus, for all k,

    ( −∫_{B_{R_k}} u^{q_k} dx )^{1/q_k} ≤ c ( −∫_{B_R} u² dx )^{1/2},

where −∫ denotes the integral average over the ball, from which it immediately follows that

    sup_{x∈B_{R/2}} u(x) ≤ c ( −∫_{B_R} |u|² dx )^{1/2}.
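The bookkeeping in the iteration above is easy to verify with a few lines of arithmetic. This is an illustrative sketch of my own (n = 3 chosen as an example, so λ = n/(n−2) = 3): the exponents q_i = 2λ^i grow geometrically (so the L^{q_i} norms approach the sup norm), the radii R_i decrease to R/2, and Σ_k λ^{−k} = n/2, which is exactly what keeps the product of constants finite.

```python
n = 3
lam = n / (n - 2)          # lambda = 2*/2 = n/(n-2)
R = 1.0

q  = [2 * lam**i for i in range(10)]            # q_i = 2 * lambda^i
Ri = [R / 2 + R / 2**(i + 1) for i in range(10)]  # R_i = R/2 + R/2^{i+1}

assert all(q[i + 1] == lam * q[i] for i in range(9))  # q_{i+1} = lambda * q_i
assert Ri[0] == R                                     # R_0 = R
assert all(Ri[i + 1] < Ri[i] for i in range(9))       # radii shrink toward R/2

s = sum(lam**(-k) for k in range(200))                # geometric series
assert abs(s - n / 2) < 1e-12                         # sum_k lambda^{-k} = n/2
```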
Remark: Instead of 2, one can take any exponent p > 0 in the above formulas.
If we start with a supersolution and u > 0, i.e.

    ∫_Ω a^{αβ}(x) D_α u D_β φ dx ≥ 0   ∀φ ∈ H^{1,2}₀(Ω), φ ≥ 0,
then we can take p < −1. Exactly as before we then get, for λ := 2*/2 > 1 and q := p + 1 < 0,

    ( ∫_{B_ρ} u^{λq} dx )^{1/λ} ≤ c (1 + q²) ( 1 + 1/(q−1)² ) (1/(R−ρ)²) ∫_{B_R} u^q dx.

As |q − 1| > 1 we have

    ( ∫_{B_ρ} u^{λq} dx )^{1/λ} ≤ c ((1+q)²/(R−ρ)²) ∫_{B_R} u^q dx.

Now we set R_i := R/2 + R/2^{i+1} as before and q_i := q₀λ^i, but with q₀ < 0. Then follows

    inf_{x∈B_{R/2}} u(x) ≥ c ( −∫_{B_R} u^{q₀} dx )^{1/q₀}   (q₀ < 0).
Up to now we have the following two estimates for a positive solution u: for any p > 0 and
any q < 0,

    sup_{x∈B_{R/2}} u(x) ≤ k₁ ( −∫_{B_R} u^p dx )^{1/p},
    inf_{x∈B_{R/2}} u(x) ≥ k₂ ( −∫_{B_R} u^q dx )^{1/q},
for some constants k₁ and k₂. The main point now is that the John-Nirenberg lemma
allows us to combine the two inequalities to get

Theorem 12.8.1 (Harnack's Inequality): If u is a solution and u > 0, then

    inf_{x∈B_{R/2}} u(x) ≥ C sup_{x∈B_{R/2}} u(x).
Proof: We have

    ∫_Ω a^{αβ} D_α u D_β φ dx = 0   ∀φ ∈ H^{1,2}₀(Ω).

As u > 0, we can choose φ := (1/u) η² and get

    −∫_Ω a^{αβ} D_α u D_β u (1/u²) η² dx + 2 ∫_Ω a^{αβ} D_α u (1/u) η D_β η dx = 0,

thus

    ∫_Ω (|Du|²/u²) η² dx ≤ C ∫_Ω |Dη|² dx,

i.e.

    ∫_{B_R} |D ln u|² dx ≤ C R^{n−2},

which implies by Poincaré's inequality that ln u ∈ BMO. Now we use the characterization
of BMO by which there exists a constant α > 0 such that for v := e^{α ln u} = u^α, we have

    ∫_{B_R} v dx ≤ C ( ∫_{B_R} (1/v) dx )^{−1}.
Hence (applying the two estimates with p = α and q = −α),

    inf_{B_{R/2}} u ≥ k₂ ( −∫_{B_R} u^{−α} dx )^{−1/α}
                   ≥ k₂ C^{−1/α} ( −∫_{B_R} u^{α} dx )^{1/α}
                   ≥ (k₂/(k₁ C^{1/α})) sup_{B_{R/2}} u.
Remark: It follows from Harnack's inequality that a solution of the equation at the beginning
of this section is Hölder continuous. Write M(R) := sup_{B_R} u, m(R) := inf_{B_R} u and
ω(R) := M(R) − m(R). Since M(2R) − u ≥ 0 is a supersolution,

    M(2R) − m(R) ≤ C (M(2R) − M(R));

u − m(2R) ≥ 0 is also a supersolution, thus

    M(R) − m(2R) ≤ C (m(R) − m(2R)),

and adding these we get

    ω(2R) + ω(R) ≤ C (ω(2R) − ω(R)),

hence ω(R) ≤ ((C−1)/(C+1)) ω(2R) and u is Hölder continuous.
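The step from geometric oscillation decay to a Hölder modulus is a short computation, sketched below for illustration (the Harnack constant C = 4 is a hypothetical example value, not from the text): with θ = (C−1)/(C+1) < 1, iterating ω(R) ≤ θ·ω(2R) along dyadic radii gives ω(r) ≲ r^α with α = log(1/θ)/log 2.

```python
import math

C = 4.0                        # assumed Harnack constant (example value)
theta = (C - 1) / (C + 1)      # oscillation decay factor
assert 0 < theta < 1

alpha = math.log(1 / theta) / math.log(2)  # resulting Holder exponent

# iterate the decay: omega(R/2^k) <= theta^k * omega(R)
omega0, R0 = 1.0, 1.0
for k in range(1, 30):
    r = R0 / 2**k
    omega = theta**k * omega0
    # at dyadic radii the geometric decay equals the Holder modulus r^alpha exactly
    assert abs(omega - r**alpha) < 1e-12
```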
12.8.2 Second Derivative Estimates
We now intend to show that if u ∈ H¹(Ω) is a weak solution of the nonlinear PDE (12.19), then
in fact u ∈ H²_loc(Ω). But to establish this, we will need to strengthen our growth condition
on L. Let us first of all suppose

    |D²L(p)| ≤ C   (p ∈ R^n).   (12.21)

In addition, let us assume that L is uniformly convex, and so there exists a constant θ > 0
such that

    Σ_{i,j=1}^n L_{p_ip_j}(p) ξ_i ξ_j ≥ θ|ξ|²   (p, ξ ∈ R^n).   (12.22)

Clearly, this is some sort of nonlinear analogue of our uniform ellipticity condition for linear
PDE. The idea will therefore be to try to utilize, or at least mimic, some of the calculations
from that chapter.
Theorem 12.8.2 (Second derivatives for minimizers): (i) Let u ∈ H¹(Ω) be a weak
solution of the nonlinear PDE (12.19), where L satisfies (12.21), (12.22). Then

    u ∈ H²_loc(Ω).

(ii) If in addition u ∈ H¹₀(Ω) and ∂Ω is C², then

    u ∈ H²(Ω),

with the estimate

    ||u||_{H²(Ω)} ≤ C ||f||_{L²(Ω)}.

Proof:
1. We will largely follow the proof of the corresponding assertion of local H²
regularity for solutions of linear second-order elliptic PDE.
Fix any open set V ⊂⊂ Ω and choose then an open set W so that V ⊂⊂ W ⊂⊂ Ω.
Select a smooth cutoff function ζ satisfying ζ ≡ 1 on V, ζ ≡ 0 in R^n \ W, 0 ≤ ζ ≤ 1.
Let |h| > 0 be small, choose k ∈ {1, . . . , n}, and substitute

    v := −D^{−h}_k (ζ² D^h_k u)

into (12.20). We are employing here the notation from the difference quotients section:

    D^h_k u(x) = (u(x + he_k) − u(x))/h   (x ∈ W).
Using the identity ∫_Ω u · D^{−h}_k v dx = −∫_Ω v · D^h_k u dx, we deduce

    Σ_{i=1}^n ∫_Ω D^h_k (L_{p_i}(Du)) · (ζ² D^h_k u)_{x_i} dx = −∫_Ω f · D^{−h}_k (ζ² D^h_k u) dx.   (12.23)
Now

    D^h_k L_{p_i}(Du(x)) = ( L_{p_i}(Du(x + he_k)) − L_{p_i}(Du(x)) ) / h
        = (1/h) ∫₀¹ d/ds L_{p_i}( s Du(x + he_k) + (1 − s) Du(x) ) ds
        = (1/h) Σ_{j=1}^n ∫₀¹ L_{p_ip_j}( s Du(x + he_k) + (1 − s) Du(x) ) ds · (u_{x_j}(x + he_k) − u_{x_j}(x))
        = Σ_{j=1}^n a^h_{ij}(x) · D^h_k u_{x_j}(x),   (12.24)

for

    a^h_{ij}(x) := ∫₀¹ L_{p_ip_j}( s Du(x + he_k) + (1 − s) Du(x) ) ds   (i, j = 1, . . . , n).   (12.25)
We substitute (12.24) into (12.23) and perform simple calculations, to arrive at the
identity:

    A₁ + A₂ := Σ_{i,j=1}^n ∫_Ω ζ² a^h_{ij} D^h_k u_{x_j} D^h_k u_{x_i} dx
               + Σ_{i,j=1}^n ∫_Ω a^h_{ij} D^h_k u_{x_j} D^h_k u · 2ζ ζ_{x_i} dx
             = −∫_Ω f · D^{−h}_k (ζ² D^h_k u) dx =: B.
Now the uniform convexity condition (12.22) implies

    A₁ ≥ θ ∫_Ω ζ² |D^h_k Du|² dx.
Furthermore we see from (12.21) that

    |A₂| ≤ C ∫_W ζ |D^h_k Du| · |D^h_k u| dx
         ≤ ε ∫_W ζ² |D^h_k Du|² dx + (C/ε) ∫_W |D^h_k u|² dx.

Furthermore, as in the proof of interior H² regularity of elliptic PDE, we have

    |B| ≤ ε ∫_Ω ζ² |D^h_k Du|² dx + (C/ε) ∫_Ω f² + |Du|² dx.
We select ε = θ/4, to deduce from the foregoing bounds on A₁, A₂, B the estimate

    ∫_Ω ζ² |D^h_k Du|² dx ≤ C ∫_W f² + |D^h_k u|² dx ≤ C ∫_Ω f² + |Du|² dx,

the last inequality valid from the properties of difference quotients.
2. Since ζ ≡ 1 on V, we find

    ∫_V |D^h_k Du|² dx ≤ C ∫_Ω f² + |Du|² dx

for k = 1, . . . , n and all sufficiently small |h| > 0. Consequently the properties of
difference quotients imply Du ∈ H¹(V), and so u ∈ H²(V). This is true for each
V ⊂⊂ Ω; thus u ∈ H²_loc(Ω).
3. If u ∈ H¹₀(Ω) is a weak solution of (12.19) and ∂Ω is C², we can then mimic the
proof of the boundary regularity theorem for elliptic PDE to prove u ∈ H²(Ω), with
the estimate

    ||u||_{H²(Ω)} ≤ C ( ||f||_{L²(Ω)} + ||u||_{H¹(Ω)} );

details are left to the reader. Now from (12.22) follows the inequality

    (DL(p) − DL(0)) · p ≥ θ|p|²   (p ∈ R^n).

If we then put v = u in (12.20), we can employ this estimate to derive the bound

    ||u||_{H¹(Ω)} ≤ C ||f||_{L²(Ω)},

and so finish the proof.
12.8.3 Remarks on Higher Regularity
We would next like to show that if L is infinitely differentiable, then so is u. By analogy with
the regularity theory developed for second-order linear elliptic PDE in 6.3, it may seem natural
to try to extend the H²_loc estimate from the previous section to obtain further estimates
in higher-order Sobolev spaces H^k_loc(Ω) for k = 3, 4, . . .
This method will not work for the nonlinear PDE (12.19), however. The reason is this: for
linear equations we could, roughly speaking, differentiate the equation many times and still
obtain a linear PDE of the same general form as that we began with. See for instance the
proof of Theorem 6.3.1. But if we differentiate a nonlinear differential equation many times,
the resulting increasingly complicated expressions quickly become impossible to handle. Much
deeper ideas are called for, the full development of which is beyond the scope of this book.
We will nevertheless at least outline the basic plan.
To start with, choose a test function w ∈ C^∞_c(Ω), select k ∈ {1, . . . , n}, and set v = −w_{x_k}
in the identity (12.20), where for simplicity we now take f ≡ 0. Since we now know
u ∈ H²_loc(Ω), we can integrate by parts to find
    ∫_Ω Σ_{i,j=1}^n L_{p_ip_j}(Du) u_{x_kx_j} w_{x_i} dx = 0.   (12.26)

Next write

    ũ := u_{x_k}   (12.27)

and

    ã_{ij} := L_{p_ip_j}(Du)   (i, j = 1, . . . , n).   (12.28)
Fix also any V ⊂⊂ Ω. Then after an approximation we find from (12.26)-(12.28) that

    ∫_Ω Σ_{i,j=1}^n ã_{ij}(x) ũ_{x_j} w_{x_i} dx = 0   (12.29)

for all w ∈ H¹₀(V). This is to say that ũ ∈ H¹(V) is a weak solution of the linear, second-
order elliptic PDE

    −Σ_{i,j=1}^n (ã_{ij} ũ_{x_j})_{x_i} = 0 in V.   (12.30)

But we cannot just apply our regularity theory from 6.3 to conclude from (12.30) that ũ is
smooth, the reason being that we can deduce from (12.21) and (12.28) only that

    ã_{ij} ∈ L^∞(V)   (i, j = 1, . . . , n).

However, DeGiorgi's theorem asserts that any weak solution of (12.30) must in fact be locally
Hölder continuous for some exponent γ > 0. Thus if W ⊂⊂ V we have ũ ∈ C^{0,γ}(W), and
so

    u ∈ C^{1,γ}_loc(Ω).
Return to the definition (12.28). If L is smooth, we now know ã_{ij} ∈ C^{0,γ}_loc(Ω)
(i, j = 1, . . . , n). Then (12.19) and the Schauder estimates from Chapter 8 assert that in fact

    u ∈ C^{2,γ}_loc(Ω).

But then ã_{ij} ∈ C^{1,γ}_loc(Ω), and so another version of Schauder's estimate implies

    u ∈ C^{3,γ}_loc(Ω).

We can continue this so-called "bootstrap" argument, eventually to deduce u ∈ C^{k,γ}_loc(Ω)
for k = 1, 2, . . . , and so u ∈ C^∞(Ω).
12.9 Exercises
12.1: [Qualifying exam 08/98] Let Ω ⊂ R^n be open (perhaps not bounded, nor with smooth
boundary). Show that if there exists a function u ∈ C²(Ω) vanishing on ∂Ω for which the
quotient

    I(u) = ( ∫_Ω |Du|² dx ) / ( ∫_Ω u² dx )

reaches its infimum λ, then u is an eigenfunction for the eigenvalue λ, so that ∆u + λu = 0
in Ω.
12.2: [Qualifying exam 08/02] Let Ω be a bounded, open set in Rn with smooth boundary Γ .
Suppose Γ = Γ0 ∪ Γ1, where Γ0 and Γ1 are nonempty, disjoint, smooth (n − 1)-dimensional
surfaces in Rn. Suppose that f ∈ L2(Ω).
a) Find the weak formulation of the boundary value problem
−∆u = f in Ω,
Du · ν = 0 on Γ1,
u = 0 on Γ0.
b) Prove that for f ∈ L2(Ω) there is a unique weak solution u ∈ H1(Ω).
12.3: [Qualifying exam 08/02] Let Ω be a bounded domain in the plane with smooth bound-
ary. Let f be a positive, strictly convex function over the reals. Let w ∈ L2(Ω). Show that
there exists a unique u ∈ H1(Ω) that is a weak solution to the equation
−∆u + f ′(u) = w in Ω,
u = 0 on ∂Ω.
Do you need additional assumptions on f ?
Part III

Modern Methods for Fully Nonlinear Elliptic PDE
13 Non-Variational Methods in Nonlinear PDE
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents

13.1 Viscosity Solutions
13.2 Alexandrov-Bakel'man Maximum Principle
13.3 Local Estimates for Linear Equations
13.4 Hölder Estimates for Second Derivatives
13.5 Existence Theorems
13.6 Applications
13.7 Exercises
13.1 Viscosity Solutions
The problem with the Perron definition of subharmonic and superharmonic functions is that
it relies on a classical notion of a function being harmonic. For a general differential
operator, call it F, there might not be an analogue of a harmonic function, as the equation
F[u] = 0 may or may not have a solution in the classical sense. This is opposed to solutions
of F[u] ≥ 0 or F[u] ≤ 0, which can readily be constructed for differential operators.
With this in mind, we make the following alternative definition:

Definition 13.1.1: A function u ∈ C⁰(Ω) is subharmonic (superharmonic) if for every ball
B ⊂⊂ Ω and superharmonic (subharmonic) function v ∈ C²(B) ∩ C⁰(B̄) (i.e. ∆v ≤ (≥) 0)
which is ≥ (≤) u on ∂B, one has v ≥ (≤) u in B.

It is not hard to see that the above definition is equivalent to the Perron definition.
Remarks:
i.) By replacing ∆ by other operators, denoted F , we then get general definitions of the
statement “F [u] ≥ (≤) 0” for u ∈ C0(Ω).
ii.) One can define a solution of "F[u] = 0" as a function satisfying both F[u] ≥ 0 and
F[u] ≤ 0.
iii.) Solutions of F[u] = 0 in this sense are referred to as viscosity solutions.
We can generalize the above definition still further:
Definition 13.1.2: A function u ∈ C0(Ω) is subharmonic(superharmonic) (with respect to
∆) if for every point y ∈ Ω and function v ∈ C2(Ω) such that u−v is maximized(minimized)
at y , we have ∆v(y) ≥ (≤) 0.
Proposition 13.1.3 (): Definitions (13.1.1) and (13.1.2) are equivalent.
Proof: (13.1.1) ⟹ (13.1.2): Let u be subharmonic, and suppose there exist y ∈ Ω and
v ∈ C²(Ω) such that u − v is maximized at y but ∆v(y) < 0. Since v ∈ C²(Ω), there exists
a ball B containing y such that ∆v < 0 on B. Now, define vε = v + ε|x − y|⁴, so that y
corresponds to a strict maximum of u − vε; for ε small enough, vε is still superharmonic on B.
With this we know there exists a δ > 0 such that vε − δ > u on ∂B with vε − δ < u
on some set Eδ ⊂ B; a contradiction.
[Figure: the ball B with the functions u, v, vε and vε − δ, and the set Eδ where vε − δ < u.]
(13.1.2) ⟹ (13.1.1): Suppose that whenever u − v attains its maximum at a point y ∈ Ω
for some v ∈ C²(Ω), one has ∆v(y) ≥ 0. Next, suppose that u is not subharmonic; i.e.,
there exists a ball B and a superharmonic v such that v ≥ u on ∂B but v < u on a set
Eδ ⊂ B. Another way to put this is to say there exists a point y ∈ B where u − v attains a
positive maximum. Now, define vε := v + ε(p² − |x − y|²); clearly ∆vε < 0. For ε small
enough, u − vε will have a positive maximum in B (the constant p can be adjusted so that
vε ≥ u on ∂B). This yields a contradiction, as ∆vε < 0 by construction.
Now, we can make the final generalization of our definition. First, we redefine the idea of
ellipticity. Consider the operator

    F[u](x) := F(x, u, Du, D²u),   (13.1)

where F ∈ C⁰(Ω × R × R^n × S^n). Note that S^n represents the space of all symmetric
n × n matrices.
Definition 13.1.4: An operator F ∈ C0(Ω× R× Rn × Sn) is degenerate elliptic if
∂F
∂r i j≥ 0
and
∂F
∂z≤ 0,
where the arguments of F are understood as F (x, z, p, [r i j ]).
Finally, we make our most general definition for subharmonic and superharmonic functions.
Definition 13.1.5: If u is upper(lower)-semicontinuous on Ω, we say F [u] ≥ (≤) 0 in the
viscosity sense if for every y ∈ Ω and v ∈ C2(Ω) such that u − v is maximized(minimized)
at y , one has F [v ](y) ≥ (≤) 0.
Remark: A viscosity solution is a function that satisfies both F [u] ≥ 0 and F [u] ≤ 0 in the
viscosity sense.
This final definition of viscosity solutions even has meaning for first order equations:
Example 10: Consider |Du|2 = 1 in R. Generalized (a.e.) solutions have a “sawtooth” form and are not unique for specified boundary values. Nonetheless, it is easily verified that the viscosity solution of this equation is unique.
[Figure: a sawtooth C0 solution, a C2 “viscosity violator” function touching it from above at a peak y, and the unique viscosity solution.]
To see this, the above figure illustrates a non-viscosity solution with an accompanying C2 function v which violates the viscosity definition: at the point y, it is clear that u − v is maximized, but F [v](y) = |Dv(y)|2 − 1 < 0. Basically, any “peak” in a C0 solution will violate the viscosity criterion via a similar C2 example. The only solution with no “peaks” is the viscosity solution depicted in the figure.
Aside: A nice way to derive this viscosity solution is to consider solutions uε of the equation

Fε[uε] := ε∆uε + |Duε|2 − 1 = 0.

Clearly Fε → F as ε → 0, where F [u] := |Du|2 − 1. It is not clear a priori whether uε converges; and even if uε converged to some limit u, it is not clear whether F [u] = 0 in any sense. Fortunately, in this particular example, all of this can be ascertained through direct calculation. Since we are in R, we have

Fε[uε] = ε uεxx + |uεx|2 − 1 = 0.

To solve this ODE, take vε := uεx; this makes the ODE separable. After integrating, one gets

uε = ε ln( (e2x/ε + 1) / (e−2/ε + 1) ) − x,

where the constants of integration have been set so that uε(−1) = uε(1) = 1. The following figure is a graph of these functions.

It is graphically clear that limε→0 uε = |x|; i.e., we have uε converging (uniformly) to the viscosity solution of |Du|2 = 1. This limit is easy to verify algebraically via the use of L’Hôpital’s rule.
[Figure: graphs of uε for ε = 1/2, 1/4, 1/8 on [−1, 1], converging to |x|.]
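This limit can also be sanity-checked numerically; the following sketch (the helper name `u_eps` is ours, not from the notes) evaluates the explicit solution above and compares it with |x|:

```python
import math

def u_eps(x, eps):
    # Explicit solution of eps*u'' + (u')^2 - 1 = 0 on (-1, 1) with u(-1) = u(1) = 1.
    return eps * math.log((math.exp(2.0 * x / eps) + 1.0)
                          / (math.exp(-2.0 / eps) + 1.0)) - x

# The boundary conditions are matched exactly.
assert abs(u_eps(1.0, 0.1) - 1.0) < 1e-9
assert abs(u_eps(-1.0, 0.1) - 1.0) < 1e-9

# u' = tanh(x/eps), so the ODE residual eps*u'' + (u')^2 - 1 vanishes identically.
x, eps = 0.3, 0.1
ux = math.tanh(x / eps)
uxx = (1.0 - ux * ux) / eps
assert abs(eps * uxx + ux * ux - 1.0) < 1e-12

# Uniform convergence to the viscosity solution |x|: the worst error occurs
# at x = 0 and is eps*log(2), so u_eps -> |x| uniformly as eps -> 0.
for eps in (0.2, 0.1, 0.05):
    err = max(abs(u_eps(i / 100.0, eps) - abs(i / 100.0)) for i in range(-100, 101))
    assert err <= eps * math.log(2.0) + 1e-9
```
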
13.2 Alexandrov-Bakel’man Maximum Principle
Let u ∈ C0(Ω) and consider the graph of u given by xn+1 = u(x), as well as hyperplanes which contact the graph from below. For y ∈ Ω, set

χu(y) := { p ∈ Rn | u(x) ≥ u(y) + (x − y) · p ∀x ∈ Ω }.

Define the lower contact set Γ− (or Γ−u) by

Γ− := { y ∈ Ω | u(x) ≥ u(y) + (x − y) · p ∀x ∈ Ω for some p = p(y) ∈ Rn }
= points of convexity of the graph of u.
Proposition 13.2.1 (): Let u ∈ C0(Ω). We have
i.) if u is differentiable at y ∈ Γ−, then χu(y) = {Du(y)};
ii.) if u is convex in Ω, then Γ− = Ω and χu is a subgradient of u;
iii.) if u is twice differentiable at y ∈ Γ−, then D2u(y) ≥ 0.
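In one dimension, membership in Γ− is easy to test directly: a supporting slope p exists at a grid point iff the largest difference quotient from the left does not exceed the smallest from the right. A small sketch (function names are ours, not from the notes):

```python
def in_lower_contact(xs, us, j):
    """Check whether xs[j] lies in the lower contact set Gamma^- (1-D case):
    a supporting line of slope p exists iff
    max left difference quotient <= min right difference quotient."""
    left = max(((us[j] - us[i]) / (xs[j] - xs[i]) for i in range(j)),
               default=float("-inf"))
    right = min(((us[i] - us[j]) / (xs[i] - xs[j]) for i in range(j + 1, len(xs))),
                default=float("inf"))
    return left <= right + 1e-12

n = 201
xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]

# Convex u: every point is a point of convexity, so Gamma^- is all of Omega.
us = [x * x for x in xs]
assert all(in_lower_contact(xs, us, j) for j in range(n))

# Concave u: no interior point admits a supporting line from below.
us = [-x * x for x in xs]
assert not any(in_lower_contact(xs, us, j) for j in range(1, n - 1))
```
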
We now investigate the relation between χu, max u, and min u. Let p0 ∉ χu(Ω) := ⋃y∈Ω χu(y). Then there exists a point y ∈ ∂Ω such that

u(x) ≥ u(y) + p0 · (x − y) ∀x ∈ Ω. (13.2)

One can see (13.2) by considering a hyperplane of the type xn+1 = p0 · x + C, first placed far below the graph of u and then translated upward until first contact; since p0 ∉ χu(Ω), the contact point must lie over ∂Ω. (13.2) yields an estimate of u; i.e.,

u ≥ min∂Ω u − |p0| d, (13.3)

where d = Diam(Ω) as before.
The relation with differential operators is as follows. Suppose u ∈ C2(Ω); then χu(y) = Du(y) for y ∈ Γ−, and the Jacobian matrix of χu is D2u ≥ 0 on Γ−. Thus, the n-dimensional Lebesgue measure of the normal image of Ω is estimated by

|χu(Ω)| = |χu(Γ−)| = |Du(Γ−)| ≤ ∫Γ− |det D2u| dx (13.4)
[Figure: the graph of u over Ω with a supporting hyperplane xn+1 = u(y) + p0 · (x − y) touching from below at a boundary point.]
since D2u ≥ 0 on Γ−. (13.4) can be realised as a consequence of the classical change of
variables formula by considering, for positive ε, the mapping χε = χ + εI, whose Jacobian
matrix D2u + εI is then strictly positive in the neighbourhood of Γ−, and by subsequently
letting ε→ 0. If we can show that χε is 1-to-1, then equality will in fact hold in (13.4). To
show this, suppose not; i.e., there exist x ≠ y such that χε(x) = χε(y). In this case we have

χ(x) − χ(y) = ε(y − x).

But we also have (abusing notation by writing χ for a selection of the contact slopes) that

u(x) ≥ u(y) + (x − y) · χ(y),
u(y) ≥ u(x) + (y − x) · χ(x).

Adding these, we have

0 ≥ (χ(x) − χ(y)) · (y − x) = ε|y − x|2 ≥ 0,

which forces y = x, a contradiction.
More generally, if h ∈ C0(Rn) is positive, then we have

∫χu(Ω) dp/h(p) = ∫Γ− ( det D2u / h(Du) ) dx. (13.5)
Remark: It is trivial to extend this to u ∈ (C2(Ω) ∩ C0(Ω)) ∪ W2,n(Ω) by using approximating functions and Lebesgue dominated convergence. Also note that even though C2(Ω) ⊂ W2,n(Ω), one has C2(Ω) ∩ C0(Ω) ⊄ W2,n(Ω). It is not clear whether further generalization is possible.
Theorem 13.2.2 (): Suppose u ∈ C2(Ω) ∩ C0(Ω) satisfies

det D2u ≤ f(x) h(Du) on Γ− (13.6)

for some positive function h ∈ C2(Rn). Assume moreover that

∫Γ− f dx < ∫Rn dp/h(p).

Then there holds an estimate of the type

minΩ u ≥ min∂Ω u − Cd,

where C denotes some constant independent of u.
Proof. Let BR(0) be the ball of radius R centered at 0 such that

∫BR(0) dp/h(p) = ∫Γ− f dx.

We then have

∫χu(Ω) dp/h(p) ≤ ∫Γ− f dx = ∫BR(0) dp/h(p) < ∫Rn dp/h(p).

Thus, for any R′ > R, there exists p0 ∉ χu(Ω) such that |p0| < R′. So (13.3) now implies that

minΩ u ≥ min∂Ω u − Rd,

which completes the proof.
13.2.1 Applications
13.2.1.1 Linear equations
First we consider the case h ≡ 1 in Theorem 13.2.2. The result is simply stated as follows: the differential inequality

det D2u ≤ f in Γ−

implies the estimate

minΩ u ≥ min∂Ω u − ( (1/ωn) ∫Γ− f dx )^{1/n} · d.
Next, suppose u ∈ C2(Ω) ∩ C0(Ω) satisfies

Lu := ai jDi ju ≤ g in Ω,

where A := [ai j] > 0 and g is a given bounded function. We derive

det A · det D2u ≤ (|g|/n)^n on Γ−,

by virtue of an elementary matrix inequality valid for any symmetric n × n matrices A, B ≥ 0:

det A · det B ≤ ( Tr(AB)/n )^n,

which we now verify.
As det[AB] = det[A] · det[B], we really only need to show that det[A] ≤ (Tr A/n)^n for symmetric A ≥ 0 (the general inequality then follows by applying this to the symmetric matrix A^{1/2}BA^{1/2}, which has determinant det A · det B and trace Tr(AB)). By continuity we may assume A is non-singular, and after diagonalization we assume WLOG that A is diagonal with eigenvalues λ1, . . . , λn > 0. So

det[A] = ∏i λi = exp( n · (1/n) ∑i ln λi ) ≤ ( (1/n) ∑i λi )^n = ( Tr A/n )^n.

The inequality above comes from the convexity of the exponential function.
Therefore it follows that

minΩ u ≥ min∂Ω u − ( d/(n ωn^{1/n}) ) || g/(det A)^{1/n} ||Ln(Γ−).
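The matrix inequality det A · det B ≤ (Tr(AB)/n)^n used above can be spot-checked numerically; a minimal sketch for n = 2 (helper names are ours):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(m):
    return m[0][0] + m[1][1]

# A few symmetric positive semi-definite 2x2 pairs (so n = 2).
pairs = [
    ([[2.0, 1.0], [1.0, 2.0]], [[3.0, 0.0], [0.0, 1.0]]),
    ([[1.0, 0.0], [0.0, 1.0]], [[5.0, 2.0], [2.0, 1.0]]),
    ([[4.0, -1.0], [-1.0, 1.0]], [[2.0, 1.0], [1.0, 3.0]]),
]
for a, b in pairs:
    lhs = det2(a) * det2(b)
    rhs = (trace2(matmul2(a, b)) / 2.0) ** 2
    assert lhs <= rhs + 1e-12
```
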
We can find an estimate for maxΩ u similarly: if Lu ≥ g in Ω, then we have

maxΩ u ≤ max∂Ω u + ( d/(n ωn^{1/n}) ) || g/(det A)^{1/n} ||Ln(Γ+),

where Γ+u = Γ−(−u).
13.2.1.2 Monge-Ampère Equation

Suppose a convex function u ∈ C2(Ω) solves

det[D2u] ≤ f(x) · h(Du)

for some functions f, h which are assumed to satisfy

∫Ω f(x) dx < ∫Rn dp/h(p).

Then we have the estimate

infΩ u ≥ inf∂Ω u − Rd,

where R is defined by

∫BR(0) dp/h(p) = ∫Ω f(x) dx.

This result is a consequence of Theorem 13.2.2 (note Γ− = Ω by convexity). If Ω is convex, then we also infer supΩ u ≤ sup∂Ω u, taking into account the convexity of u.
Remark: If det[D2u] = f(x) · h(Du) in Ω, then

∫Ω ( det[D2u] / h(Du) ) dx = ∫Ω f(x) dx.

Thus, the inequality

∫Ω f(x) dx ≤ ∫Rn dp/h(p)

is a necessary condition for solvability. It is also sufficient, by the work of Urbas. The inequality is strict if Du is bounded.
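A classical instance of this solvability condition — our illustration, not taken from the text above — is the prescribed Gauss curvature equation, where

```latex
\det D^2u = K(x)\,(1+|Du|^2)^{\frac{n+2}{2}}, \qquad h(p) = (1+|p|^2)^{\frac{n+2}{2}},
\qquad
\int_{\mathbb{R}^n} \frac{dp}{(1+|p|^2)^{(n+2)/2}}
= n\omega_n\int_0^\infty \frac{r^{n-1}\,dr}{(1+r^2)^{(n+2)/2}} = \omega_n,
```

with ωn the volume of the unit ball, so the necessary condition reads ∫Ω K dx ≤ ωn.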
13.2.1.3 General linear equations
Consider a solution u ∈ C2(Ω) ∩ C0(Ω) of

Lu := ai jDi ju + biDiu ≤ f.

We rewrite this in the form

ai jDi ju ≤ −biDiu + f.

Define

h(p) := ( |p|^{n/(n−1)} + µ^{n/(n−1)} )^{n−1}

for µ := || f/D^{1/n} ||Ln(Γ+) ≠ 0 and D := det[A]. We infer the following estimate as before:

maxΩ u ≤ max∂Ω u + C || f/D^{1/n} ||Ln(Γ+),

where C depends on n, d, and || b/D^{1/n} ||Ln(Γ+).
13.2.1.4 Oblique boundary value problems
The above estimates can all be extended to boundary conditions of the form

β · Du + γu = g on ∂Ω,

where β · ν > 0 on ∂Ω, γ > 0 on Ω, and g is a given function. Here ν denotes the outer normal on ∂Ω. We obtain the more general estimate

infΩ u ≥ inf∂Ω (g/γ) − C ( sup∂Ω (|β|/γ) + d ),

where d = Diam(Ω).
13.3 Local Estimates for Linear Equations
13.3.1 Preliminaries
Consider the linear operator Lu := ai jDi ju under the strict ellipticity condition

λI ≤ [ai j] ≤ ΛI for 0 < λ ≤ Λ, λ, Λ ∈ R.

To see why we discuss the linear operator Lu, consider the following nonlinear equation of simple type:

F(D2u) = 0. (13.7)

Differentiating (13.7) with respect to a non-zero vector γ, we obtain

(∂F/∂ri j)(D2u) Di jDγu = 0.
If we set L := (∂F/∂ri j)Di j, then we have LDγu = 0. Differentiating again, we find

(∂F/∂ri j) Di jDγγu + (∂2F/∂ri j∂rkl) Di jDγu · DklDγu = 0.

If F is concave with respect to r, which most of the important examples satisfy, then the second term is ≤ 0, and we derive

(∂F/∂ri j)(D2u) Di jDγγu ≥ 0, i.e., L(Dγγu) ≥ 0.

This is a linear differential inequality and justifies our investigation of the linear operator Lu = ai jDi ju.
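A standard example satisfying this concavity hypothesis (our illustration) is the Monge–Ampère operator written as F(r) = log det r on positive symmetric matrices:

```latex
F(r) = \log\det r, \qquad \frac{\partial F}{\partial r_{ij}} = r^{ij} \ (\text{the inverse matrix}),
\qquad
\frac{d^2}{dt^2}\,\log\det(r+ts)\Big|_{t=0} = -\operatorname{Tr}\!\big((r^{-1}s)^2\big) \le 0
```

for any symmetric s, since r^{−1}s is similar to the symmetric matrix r^{−1/2}sr^{−1/2} and hence has real eigenvalues.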
13.3.2 Local Maximum Principle
Theorem 13.3.1 (): Let Ω ⊂ Rn be a bounded open domain and let BR := BR(y) ⊂ Ω be a ball. Suppose u ∈ C2(Ω) fulfills Lu ≥ f in Ω for some given function f. Then for any 0 < σ < 1 and p > 0, we have

supBσR u ≤ C { ( (1/Rn) ∫BR (u+)p dx )^{1/p} + (R/λ) ||f||Ln(BR) },

where C depends on n, p, σ and Λ/λ. Here u+ = max{u, 0}, and BσR(y) denotes the concentric subball.
Proof. We may take R = 1 in view of the scaling x → x/R, f → R2f. Let

η := (1 − |x|2)β in B1, η := 0 in Ω \ B1, with β > 1.

It is easy to verify that

|Dη| ≤ Cη^{1−1/β}, |D2η| ≤ Cη^{1−2/β}, (13.8)

for some constant C. Introduce the test function v := η(u+)2 and compute

Lv = ai jDi jη (u+)2 + 2ai jDiη Dj((u+)2) + η ai jDi j((u+)2)
≥ −CΛη^{1−2/β}(u+)2 + 4u+ai jDiηDju+ + 2η ai jDiu+Dju+ + 2η u+ai jDi ju+
≥ −CΛη^{1−2/β}(u+)2 + 2η u+f.

Here use was made of (13.8), of Lu ≥ f where u > 0, and of the Cauchy–Schwarz inequality

|ai jDiηDju+| ≤ (ai jDiηDjη)^{1/2} (ai jDiu+Dju+)^{1/2}.
Apply the Alexandrov maximum principle (2.3.1) to find

supB1 v ≤ (C(n)/λ) || Λη^{1−2/β}(u+)2 + ηu+f ||Ln(B1)
≤ (C(n)/λ) || Λv^{1−2/β}(u+)^{4/β} + v^{1/2}η^{1/2}f ||Ln(B1)
≤ C(Λ/λ) || v^{1−2/β}(u+)^{4/β} ||Ln(B1) + (1/λ) || v^{1/2}η^{1/2}f ||Ln(B1)
≤ C(Λ/λ) (supB1 v)^{1−2/β} || (u+)^{4/β} ||Ln(B1) + ( (supB1 v)^{1/2}/λ ) ||f||Ln(B1).

Choosing β = 4n/p, we deduce

supB1 v^{1/2} ≤ C(Λ/λ) (supB1 v)^{1/2−2/β} ( ∫B1 (u+)p dx )^{1/n} + (1/λ) ||f||Ln(B1),

from which we conclude, invoking the Young inequality, that

supB1 v^{1/2} ≤ C(Λ/λ, n, p) ( ∫B1 (u+)p dx )^{1/p} + (1/λ) ||f||Ln(B1).

This completes the proof.
13.3.3 Weak Harnack inequality
Lemma 13.3.2 (): Let u ∈ C2(Ω) satisfy Lu ≤ f, u ≥ 0 in BR := BR(y) ⊂ Ω, for some function f. For 0 < σ < τ < 1, we have

infBσR u ≤ C ( infBτR u + (R/λ) ||f||Ln(BR) ),

where C = C(n, σ, τ, Λ/λ).
Proof. Set, for k > 0,

w := ( (|x/R|−k − 1) / (σ−k − 1) ) infBσR u.

There holds w ≤ u on ∂BσR and on ∂BR. Now compute

L|x|−k = ai jDi j|x|−k = ai j( −kδi j|x|−k−2 + k(k + 2)xixj|x|−k−4 )
= −k|x|−k−2 ( Tr A − (k + 2)(ai jxixj)/|x|2 )
≥ −k|x|−k−2 ( Tr A − (k + 2)λ )
≥ 0,

if we choose k > (Tr A)/λ − 2. Thus

Lw ≥ 0, L(u − w) ≤ f.
Applying the Alexandrov maximum principle again, in the annulus BR \ BσR, we obtain

u ≥ w − (CR/λ) ||f||Ln(BR).

Taking the infimum over BτR \ BσR (where w is bounded below by a fixed multiple of infBσR u) gives the desired result.
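For the pure Laplacian (ai j = δi j, so Tr A = n and λ = 1), the condition above becomes k > n − 2, and indeed ∆|x|−k = k2|x|−k−2 ≥ 0 in R2 with k = 1. A finite-difference sanity check of this (code and names are our own sketch):

```python
def barrier(x, y, k=1.0):
    # radial barrier |x|^(-k), defined away from the origin
    return (x * x + y * y) ** (-k / 2.0)

def laplacian(g, x, y, h=1e-4):
    # standard 5-point finite-difference approximation of the Laplacian
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4.0 * g(x, y)) / (h * h)

# In R^2 with k = 1 > n - 2 = 0: the Laplacian of |x|^(-1) equals |x|^(-3) > 0.
for (x, y) in [(1.0, 0.0), (0.6, 0.8), (0.5, -0.5)]:
    exact = (x * x + y * y) ** (-1.5)
    approx = laplacian(barrier, x, y)
    assert approx > 0.0
    assert abs(approx - exact) < 1e-4 * exact
```
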
Theorem 13.3.3 (): Let u ∈ C2(Ω) satisfy Lu ≤ f, u ≥ 0 in BR := BR(y) ⊂ Ω, for some function f. Then for any 0 < σ, τ < 1, there exists p > 0 such that

( (1/Rn) ∫BσR up dx )^{1/p} ≤ C ( infBτR u + (R/λ) ||f||Ln(BR) ),

where p, C depend on n, σ, τ and Λ/λ.
Proof. WLOG take y = 0 so that BR(0) = BR; we also take f = 0 for simplicity (the general case adds the usual (R/λ)||f||Ln(BR) term). Consider

η(x) := 1 − |x|2/R2;

then η − u ≤ 0 on ∂BR, while L(η − u) = Lη − Lu ≥ −2(Tr A)/R2. The Alexandrov maximum principle, applied on the set {x ∈ BR | η − u ≥ 0}, now indicates

η − u ≤ ( d/(nωn^{1/n}) ) || 2(Tr A)/(det[A]^{1/n}R2) ||Ln({x∈BR | η≥u})
≤ ( d/(λnωn^{1/n}) ) || 2(Tr A)/R2 ||Ln({x∈BR | η≥u})
≤ C(Λ/λ) ( |{x ∈ BR | η ≥ u}| / |BR| )^{1/n}
≤ (1/2)(1 − σ2),

provided

|{u ≤ 1} ∩ BR| / |BR| (13.9)

is sufficiently small; note that η ≤ 1 implies {η ≥ u} ⊂ {u ≤ 1} ∩ BR. Since η ≥ 1 − σ2 on BσR, we conclude u ≥ (1/2)(1 − σ2) =: C on BσR, and then the previous lemma ascertains u ≥ C′ on BτR.

Remarks:

i.) This assertion contains all the “information” of the PDE.

ii.) We essentially shrink the ball in the above calculation and then use Lemma 13.3.2 to expand the ball back out (perturbing by measure).
At this point, in order to simplify a later measure-theoretic argument, we change our basis domain from balls to cubes. We also take f = 0 and drop the bar for simplicity.

Let K := KR(y) be an open cube with center y and side length 2R, with sides parallel to the coordinate axes. The statement corresponding to (13.9) is as follows: if

|{u ≥ 1} ∩ KR| ≥ θ|KR| and K3R ⊂ B := B1, (13.10)
then u ≥ γ on KR. Now consider any fixed cube K0 in Rn with a measurable subset Γ ⊂ K0 and 0 < θ < 1. Defining

Γ̃ := ⋃ { K3R(y) ∩ K0 | |KR(y) ∩ Γ| ≥ θ|KR(y)| },

we have either Γ̃ = K0 or |Γ̃| ≥ θ−1|Γ| by the cube decomposition at the beginning of the notes. With this in mind, we make the following claim:

infK0 u ≥ C ( |Γ|/|K0| )^{ln γ/ln θ}. (13.11)
Proof of Claim: We use induction on m. For m = 1, take Γ := {x ∈ K0 | u ≥ γ0 := 1}; by the previous argument, u ≥ γ on Γ̃, with |Γ̃| ≥ θ−1|Γ|. Now assume inductively that the condition |Γ| > θm|K0|, m ∈ N, implies the estimate

u ≥ γ0γm = γm on K0.

The representation γ0 = 1 may seem absurd at this point, but it clarifies the final inductive step immensely. To proceed to m + 1, suppose |Γ| > θm+1|K0|. By the m = 1 case, i.e. (13.10), we have u ≥ γ on Γ̃, and the cube decomposition indicates that |Γ̃| ≥ θ−1|Γ| > θm|K0|. Applying the inductive hypothesis to the set Γ̃, with γ in place of γ0, we ascertain

u ≥ γ · γm = γm+1 on K0.

This argument is tricky in the sense that γ is not only a part of the inductive result, it is also a parameter of Γ and Γ̃ (hence the γ0 notation). Once this is clear, we obtain the claim by choosing γ and C correctly.
Returning to the ball, we infer that

infB u ≥ C ( |Γ ∩ B| / |B| )^κ, (13.12)

where C and κ are positive constants depending only on n and Λ/λ. Now define

Γt := {x ∈ B | u(x) ≥ t}.

Replacing u by u/t in (13.12) yields

infB u ≥ Ct ( |Γt ∩ B| / |B| )^κ.

To obtain the weak Harnack inequality, we normalize infB u = 1 and hence

|Γt| ≤ C · t^{−1/κ}.
Choosing p = (2κ)−1, we have

∫B up dx = ∫B ∫0^{u(x)} p tp−1 dt dx
= ∫0^∞ p tp−1 · |{x ∈ B | u ≥ t}| dt
≤ ∫0^∞ p tp−1 min{ |B|, |Γt| } dt
≤ C,

since p − 1/κ − 1 < −1 makes the tail integral converge. The second equality can be viewed as an instance of the coarea formula from geometric measure theory,

∫Ω g(x) · |Jmf(x)| dx = ∫Rm ∫f−1(y) g(x) dHn−m(x) dy;

intuitively, we slice the region {(x, t) | x ∈ B, 0 ≤ t ≤ u(x)} by the hyperplanes t = const and integrate these slices. Thus, we have obtained our result for B. The completion of the proof follows from a covering argument.
Theorems (13.3.1) and (13.3.3) easily give

Theorem 13.3.4 (): Suppose Lu = f, u ≥ 0 in BR := BR(y) ⊂ Ω. Then for any 0 < σ < 1, we have

supBσR u ≤ C ( infBσR u + (R/λ) ||f||Ln(BR) ),

where C = C(n, σ, Λ/λ).
13.3.4 Hölder Estimates
Lemma 13.3.5 (): Suppose a non-decreasing function ω on [0, R0] satisfies

ω(σR) ≤ σ · ω(R) + Rδ

for all R ≤ R0, where σ < 1 and δ > 0 are given constants. Then we have

ω(R) ≤ C (R/R0)^α ( ω(R0) + R0δ ),

where C, α > 0 depend on σ and δ.

Proof. We may take δ < 1. Replace ω by

ω̃(R) := ω(R) − ARδ;

selecting A suitably (A ≥ 1/(σδ − σ) works, since δ < 1), we obtain ω̃(σR) ≤ σ · ω̃(R). Thus, by iteration,

ω̃(σνR0) ≤ σν · ω̃(R0).

Choosing ν such that σν+1R0 < R ≤ σνR0 gives the result, upon noting that ω is non-decreasing.
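The exponent α can be made explicit when the hypothesis holds with a general decay factor γ < 1 in place of σ (a sketch of the standard computation, with our notation):

```latex
\omega(\sigma^{\nu}R_0) \le \gamma^{\nu}\,\omega(R_0), \qquad
\gamma^{\nu} = (\sigma^{\nu})^{\alpha}, \quad \alpha := \frac{\ln\gamma}{\ln\sigma} > 0,
```

so choosing ν with σν+1R0 < R ≤ σνR0 gives σν < σ−1(R/R0) and hence ω(R) ≤ ω(σνR0) ≤ σ−α(R/R0)α ω(R0).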
Now we present the next pointwise estimate, which is due to Krylov and Safonov.

Theorem 13.3.6 (): Suppose u satisfies Lu = f in Ω ⊂ Rn for some f ∈ Ln(Ω). Let BR := BR(y) ⊂ Ω be a ball and 0 < σ < 1. Then we have

oscBσR u ≤ C ( σα oscBR u + (R/λ) ||f||Ln(BR) ), (13.13)

where α = α(n, Λ/λ) > 0 and C = C(n, Λ/λ).

This is analogous to the famous De Giorgi–Nash estimate for the divergence-form equation Di(ai j(x)Dju) = 0.
Proof. Take BR = BR(y) ⊂ Ω as in the theorem and define

MR := supBR u, mR := infBR u.

There holds

oscBR u = MR − mR =: ω(R).

Applying the weak Harnack inequality to MR − u and u − mR respectively, we obtain

( (1/Rn) ∫BσR (MR − u)p dx )^{1/p} ≤ C ( MR − MσR + (R/λ) ||f||Ln(BR) ),
( (1/Rn) ∫BσR (u − mR)p dx )^{1/p} ≤ C ( mσR − mR + (R/λ) ||f||Ln(BR) ).

Adding these two inequalities and noting (MR − u) + (u − mR) = ω(R), we get

ω(R) ≤ C ( ω(R) − ω(σR) + (R/λ) ||f||Ln(BR) ), (13.14)

where we have used the inequality

( ∫ |f + g|p dx )^{1/p} ≤ 2^{1/p} { ( ∫ |f|p dx )^{1/p} + ( ∫ |g|p dx )^{1/p} },

valid for 0 < p < 1. (13.14) can be rewritten in the form

ω(σR) ≤ (1 − 1/C) ω(R) + (R/λ) ||f||Ln(BR).

Applying Lemma 13.3.5 now yields the desired result.
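The oscillation decay translates into interior Hölder continuity in the usual way (a sketch of the standard argument): for x, y ∈ BR/2(z) with BR(z) ⊂ Ω and r := |x − y|,

```latex
|u(x)-u(y)| \;\le\; \operatorname*{osc}_{B_{2r}(x)} u
\;\le\; C\Big(\frac{r}{R}\Big)^{\alpha}
\Big(\operatorname*{osc}_{B_R(z)} u + \frac{R}{\lambda}\,\|f\|_{L^n(B_R(z))}\Big),
```

so u ∈ C0,α locally, with constants depending only on n and Λ/λ.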
13.3.5 Boundary Gradient Estimates
Now we turn our attention to the boundary gradient estimate, due to Krylov. The situation is as follows: let B+R be the half ball of radius R centered at the origin,

B+R := BR(0) ∩ {xn > 0}.

Suppose u satisfies

Lu := ai jDi ju = f in B+R,
u = 0 on BR(0) ∩ {xn = 0}. (13.15)

Then we have

Theorem 13.3.7 (): Let u fulfil (13.15) and let 0 < σ < 1. Then there holds

oscB+σR (u/xn) ≤ C ( σα oscB+R (u/xn) + (R/λ) supB+R |f| ),

where α > 0, C > 0 depend on n and Λ/λ.

From this theorem, we infer the Hölder estimate for the gradient on the boundary {xn = 0}. In fact,

osc|x′|<σR Du(x′, 0) ≤ C ( σα supB+R |Du| + (R/λ) supB+R |f| )

holds, where x′ = (x1, . . . , xn−1).

Proof: Define v := u/xn. We may assume that v is bounded in B+R. Write, for some δ > 0,

BR,δ := { |x′| < R, 0 < xn < δR }.
Under the assumption that u ≥ 0 in B+R, we want to prove the next claim.

Claim: There exists δ = δ(n, Λ/λ) such that

inf{|x′|<R, xn=δR} v ≤ 2 ( infBR/2,δ v + (R/λ) supB+R |f| )

for any R ≤ R0.

Proof of Claim: We may normalize as before to take λ = 1, R = 1 and

inf{|x′|<1, xn=δ} v = 1.

Define the comparison function w in B1,δ as

w(x) := ( 1 − |x′|2 + (1 + sup |f|)(xn − δ)/√δ ) xn.

It is easy to check that w ≤ u on ∂B1,δ. Compute

Lw = −2xn ∑i=1..n−1 ai i + 2(1 + sup |f|) ann/√δ − 2 ∑i=1..n−1 ainxi ≥ f in B1,δ

if we take δ sufficiently small. Thus the maximum principle implies that on B1/2,δ

v ≥ 1 − |x′|2 + (1 + sup |f|)(xn − δ)/√δ ≥ 1/2 − sup |f|,
again for sufficiently small δ. Normalizing back we obtain the claim.
To resume the proof of the theorem, we apply the Harnack inequality in the region {|x′| < R, δR/2 < xn < 3δR/2} to obtain

sup{|x′|<R, δR/2<xn<3δR/2} v ≤ C ( inf{|x′|<R, δR/2<xn<3δR/2} v + (R/λ) sup |f| ) (13.16)
≤ C ( inf{|x′|<R, xn=δR} v + (R/λ) sup |f| )
≤ C ( infBR/2,δ v + (R/λ) sup |f| ).

The last inequality comes from the claim.
Now we set up the oscillation estimate. Define

MR := supBR,δ v, mR := infBR,δ v, ω(R) := MR − mR = oscBR,δ v.

Consider M2R − v and v − m2R and invoke (13.16). We find

M2R − inf{|x′|<R, δR/2<xn<3δR/2} v ≤ C ( M2R − supBR/2,δ v + (R/λ) sup |f| ),
inf{|x′|<R, δR/2<xn<3δR/2} v − m2R ≤ C ( infBR/2,δ v − m2R + (R/λ) sup |f| ).

Adding these tells us

ω(2R) ≤ C ( ω(2R) − ω(R/2) + (R/λ) sup |f| ).

Lemma 13.3.5 now implies our desired result:

ω(R) ≤ C ( (R/R0)α ω(R0) + (R0/λ) sup |f| ).
13.4 Hölder Estimates for Second Derivatives
In this section, we wish to derive second derivative estimates for the general nonlinear equation of simple type

F(D2u) = ψ in Ω, (13.17)

where Ω ⊂ Rn is a domain. More general cases can be handled by perturbation arguments; see Safonov [Sa2].

We begin with the interior estimates of Evans [Eva82] and Krylov [Kry82]. For further study, suitable function spaces are introduced and various interpolation inequalities are recalled. Finally, we show the global bound for second derivatives, due to Krylov. For the interior estimate, we follow the treatment of [GT01], but with an idea of Safonov [Saf84] being used to avoid the Motzkin–Wasow lemma. The corresponding boundary estimate will be derived from Theorem 13.3.7.
13.4.1 Interior Estimates
Concerning (13.17), we assume F ∈ C2(Rn×n), u ∈ C4(Ω), ψ ∈ C2(Ω) and

i.) λI ≤ Fr(D2u) ≤ ΛI, where 0 < λ ≤ Λ and Fr := [∂F/∂ri j];

ii.) F is concave on some open set U ⊂ Rn×n including the range of D2u.

Under these assumptions, our first result is the following.
Theorem 13.4.1 (): Let BR = BR(y) ⊂ Ω and let 0 < σ < 1. Then we have

oscBσR D2u ≤ Cσα ( oscBR D2u + (R/λ) sup |Dψ| + (R2/λ) sup |D2ψ| ),

where C, α > 0 depend only on n, Λ/λ.
Proof: Recall the connection between (13.17) and the linear differential inequalities stated in 13.3.1. Differentiating (13.17) twice with respect to γ, we obtain

Fi jDi jDγγu ≥ Dγγψ, (13.18)

in view of assumption ii.), where Fi j := ∂F/∂ri j. If we set

ai j(x) := Fi j(D2u(x)), L := ai jDi j,
then there holds
Lw ≥ Dγγψ for w = w(x, γ) := Dγγu
and λI ≤ [ai j ] ≤ ΛI.
Take a ball B2R ⊂ Ω and define, as before,

M2(γ) := supB2R w(·, γ), M1(γ) := supBR w(·, γ),
m2(γ) := infB2R w(·, γ), m1(γ) := infBR w(·, γ).

Applying the weak Harnack inequality (Theorem 13.3.3) to M2 − w on BR, we find

( R−n ∫BR (M2(γ) − w(x, γ))p dx )^{1/p} ≤ C ( M2(γ) − M1(γ) + (R2/λ) sup |D2ψ| ). (13.19)
To establish the Hölder estimate, we also need a bound for −w. At present, however, we only have the inequality (13.18). Thus we return to the equation

F(D2u(x)) − F(D2u(y)) = ψ(x) − ψ(y),
for x ∈ BR, y ∈ B2R. Since F is concave,

ψ(y) − ψ(x) ≤ Fi j(D2u(x))(Di ju(y) − Di ju(x)) (13.20)
= ai j(x)(Di ju(y) − Di ju(x))
=: ∑i αiµi,

where µ1 ≤ · · · ≤ µn are the eigenvalues of the matrix D2u(y) − D2u(x) and λ ≤ αi ≤ Λ. Setting

ω(2R) := max|γ|=1 (M2(γ) − m2(γ)), ω(R) := max|γ|=1 (M1(γ) − m1(γ)),
ω∗(2R) := ∫|γ|=1 (M2(γ) − m2(γ)) dγ, ω∗(R) := ∫|γ|=1 (M1(γ) − m1(γ)) dγ,
it follows by integration of (13.19) that

( R−n ∫|γ|=1 ∫BR (M2(γ) − w(x, γ))p dx dγ )^{1/p} ≤ C ( ω∗(2R) − ω∗(R) + (R2/λ) sup |D2ψ| ). (13.21)
For any fixed x ∈ BR, one of the inequalities

max|γ|=1 (M1(γ) − w(x, γ)) ≥ (1/2)ω(R), max|γ|=1 (w(x, γ) − m1(γ)) ≥ (1/2)ω(R)

must be valid. Suppose it is the second one, and choose y ∈ BR so that also

max|γ|=1 (w(x, γ) − w(y, γ)) ≥ (1/2)ω(R).
Then from (13.20) we have

µn = max|γ|=1 (w(y, γ) − w(x, γ)) ≥ −( λ/((n − 1)Λ) ) µ1 − ( 2R/((n − 1)Λ) ) sup |Dψ|
≥ ( λ/(2(n − 1)Λ) ) ω(R) − ( 2R/((n − 1)Λ) ) sup |Dψ|.
Consequently, in all cases we obtain

max|γ|=1 (M1(γ) − w(x, γ)) ≥ δ ( ω(R) − (4R/λ) sup |Dψ| )

for some positive constant δ depending on n, Λ/λ. But then, because w is a quadratic form in γ, we must have

∫|γ|=1 (M1(γ) − w(x, γ)) dγ ≥ δ∗ ( ω(R) − (4R/λ) sup |Dψ| )

for a further δ∗ = δ∗(n, Λ/λ), so that from (13.21) we obtain

ω(R) ≤ C ( ω∗(2R) − ω∗(R) + (R/λ) sup |Dψ| + (R2/λ) sup |D2ψ| ),

and the result follows by Lemma 13.3.5 and the following lemma of Safonov [Saf84].
Lemma 13.4.2 (): We have

ω(R) ≤ C1ω∗(R) ≤ C2ω(R)

for some positive constants C1 ≤ C2.

Proof: Since the right-hand inequality is obvious, it suffices to prove

M1(γ) − m1(γ) ≤ C1ω∗(R)

for any |γ| = 1; without loss of generality we may take γ = e1. Writing

2D11u(x) = w(x, γ + 2e1) − 2w(x, γ + e1) + w(x, γ),

which holds for every γ since w is a quadratic form in γ, we obtain by integration over γ ∈ B1(0), for any x, y ∈ BR,

2ωn (D11u(y) − D11u(x)) = ( ∫B1(2e1) − 2∫B1(e1) + ∫B1(0) ) (w(y, γ) − w(x, γ)) dγ
≤ 2 ∫B3(0) (M1(γ) − m1(γ)) dγ
= 2 ∫0^3 rn+1 dr ∫|γ|=1 (M1(γ) − m1(γ)) dγ,

and hence the result follows with C1 = 3n+2/((n + 2)ωn).
The proof of the theorem is now complete.
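The second-difference identity for the quadratic form w(x, γ) = γᵀD2u(x)γ used in the proof can be checked directly (a small sketch; the helper names are ours):

```python
def q(m, g):
    # quadratic form g^T m g for a symmetric 2x2 matrix m
    return sum(m[i][j] * g[i] * g[j] for i in range(2) for j in range(2))

m = [[3.0, -1.0], [-1.0, 2.0]]     # plays the role of D^2 u(x)

for g in [(0.7, -0.4), (2.0, 5.0), (0.0, 1.0)]:
    g1 = (g[0] + 1.0, g[1])        # gamma + e1
    g2 = (g[0] + 2.0, g[1])        # gamma + 2*e1
    second_diff = q(m, g2) - 2.0 * q(m, g1) + q(m, g)
    # identity: Q(gamma + 2 e1) - 2 Q(gamma + e1) + Q(gamma) = 2 Q(e1) = 2 m[0][0]
    assert abs(second_diff - 2.0 * m[0][0]) < 1e-12
```
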
13.4.2 Function Spaces
Here we introduce some suitable function spaces for further study on the classical theory
of second order nonlinear elliptic equations. The notations are compatible with those of
[GT01].
Let Ω ⊂ Rn be a bounded domain and suppose u ∈ C0(Ω). Define the seminorms

[u]0,α;Ω := sup x,y∈Ω, x≠y |u(x) − u(y)|/|x − y|α, 0 < α ≤ 1,
[u]k;Ω := sup |β|=k, x∈Ω |Dβu(x)|, k = 0, 1, 2, . . . ,

and the space

Ck,α(Ω) := { u ∈ Ck(Ω) | ||u||k,α;Ω := ∑j=0..k [u]j;Ω + ∑|β|=k [Dβu]α;Ω < ∞ }.

Ck,α(Ω) is a Banach space under the norm || · ||k,α;Ω. Moreover, we see

Ck(Ω) = Ck,0(Ω)

with the norm ||u||k;Ω := ∑j=0..k [u]j;Ω.
Recall

Ck,α(Ω) = { u ∈ Ck(Ω) | u|Ω′ ∈ Ck,α(Ω′) for any Ω′ ⊂⊂ Ω },

and define the interior seminorms

[u]∗α;Ω := supΩ′⊂⊂Ω dαΩ′ [u]α;Ω′, 0 < α ≤ 1,

where u ∈ C0,α(Ω) and dΩ′ := Dist(Ω′, ∂Ω). Similarly,

[u]∗k;Ω := supΩ′⊂⊂Ω dkΩ′ [u]k;Ω′, k = 0, 1, 2, . . . ,
[u]∗k,α;Ω := supΩ′⊂⊂Ω, |β|=k dk+αΩ′ [Dβu]α;Ω′,

and

Ck,α∗(Ω) := { u ∈ Ck(Ω) | ||u||∗k,α;Ω := ∑j=0..k [u]∗j;Ω + [u]∗k,α;Ω < ∞ }.

Ck,α∗(Ω) is a Banach space under the norm || · ||∗k,α;Ω.
13.4.3 Interpolation inequalities

For the above defined spaces, there hold various interpolation inequalities.

Lemma 13.4.3 (): Suppose j + β < k + α, where j, k = 0, 1, 2, . . . and 0 ≤ α, β ≤ 1. Let Ω be an open subset of Rn+ with a boundary portion T on {xn = 0}, and assume u ∈ Ck,α(Ω ∪ T). Then for any ε > 0 and some constant C = C(ε, j, k) we have

[u]∗j,β;Ω∪T ≤ C|u|0;Ω + ε[u]∗k,α;Ω∪T,
|u|∗j,β;Ω∪T ≤ C|u|0;Ω + ε[u]∗k,α;Ω∪T. (13.22)
Proof: Again we suppose that the right members are finite. The proof is patterned after the corresponding interior result, and we emphasize only the details in which the proofs differ. In the following we omit the subscript Ω ∪ T, which will be implicitly understood.

We consider first the cases 1 ≤ j ≤ k ≤ 2, β = 0, α ≥ 0, starting with the inequality

[u]∗2 ≤ C(ε)|u|0 + ε[u]∗2,α, α > 0. (13.23)

Let x be any point in Ω, dx its distance from ∂Ω − T, and d = µdx, where µ ≤ 1/4 is a constant to be specified later. If Dist(x, T) ≥ d, then the ball Bd(x) ⊂ Ω and the argument proceeds as in the interior case, leading to the inequality

d2x |Di lu(x)| ≤ C(ε)[u]∗1 + ε[u]∗2,α,

provided µ = µ(ε) is chosen sufficiently small. If Dist(x, T) < d, we consider the ball B = Bd(x0) ⊂ Ω, where x0 is on the perpendicular to T passing through x and Dist(x, x0) = d. Let x′, x″ be the endpoints of the diameter of B parallel to the xl axis. Then we have, for some x̄ on this diameter,

|Di lu(x̄)| = |Diu(x′) − Diu(x″)|/(2d) ≤ (1/d) supB |Diu| ≤ (2/µ) d−2x sup y∈B dy|Diu(y)| ≤ (2/µ) d−2x [u]∗1, since dy > dx/2 ∀y ∈ B;

and

|Di lu(x)| ≤ |Di lu(x̄)| + |Di lu(x) − Di lu(x̄)|
≤ (2/µ) d−2x [u]∗1 + (2d)α · 2^{2+α} d−2−αx [u]∗2,α,

again since dy > dx/2 for all y ∈ B; hence

d2x |Di lu(x)| ≤ (2/µ)[u]∗1 + 16µα[u]∗2,α ≤ C[u]∗1 + ε[u]∗2,α,

provided 16µα ≤ ε, with C = 2/µ. Choosing the smaller value of µ corresponding to the two cases Dist(x, T) ≥ d and Dist(x, T) < d, and taking the supremum over all x ∈ Ω and i, l = 1, . . . , n, we obtain (13.23). For j = k = 1 we proceed as in the interior case, with the modifications suggested by the above proof of (13.23). Together with the preceding cases, this gives us (13.22) for 1 ≤ j ≤ k ≤ 2, β = 0, α ≥ 0.

The proof of (13.22) for β > 0 follows closely the remaining cases of the interior result. The principal difference is that the argument for β > 0, α = 0 now requires application of the theorem of the mean in the truncated ball Bd(x) ∩ Ω for points x such that Dist(x, T) < d.
Lemma 13.4.4 (): Suppose k1 + α1 < k2 + α2 < k3 + α3 with k1, k2, k3 = 0, 1, 2, . . . and 0 < α1, α2, α3 ≤ 1. Suppose further ∂Ω ∈ Ck3,α3. Then for any ε > 0, there exists a constant Cε = O(ε−k) for some positive k such that

||u||k2,α2;Ω ≤ ε||u||k3,α3;Ω + Cε||u||k1,α1;Ω

holds for any u ∈ Ck3,α3(Ω).
Proof: The proof is based on a reduction to the preceding lemma by means of a boundary-straightening argument. At each point x0 ∈ ∂Ω, let Bρ(x0) be a ball and ψ a Ck,α diffeomorphism that straightens the boundary in a neighborhood containing B′ = Bρ(x0) ∩ Ω and T = Bρ(x0) ∩ ∂Ω. Let ψ(B′) = D′ ⊂ Rn+ and ψ(T) = T′ ⊂ ∂Rn+. Since T′ is a hyperplane portion of ∂D′, we may apply the interpolation inequality of the previous lemma in D′ to the function ū = u ◦ ψ−1 to obtain

[ū]j,β;D′∪T′ ≤ C(ε)|ū|0;D′ + ε[ū]∗k,α;D′∪T′.

If x′ = ψ(x), y′ = ψ(y), one sees that

K−1|x − y| ≤ |x′ − y′| ≤ K|x − y|

for all points x, y ∈ B′, where K is a constant depending on ψ and Ω; hence the seminorms transform with controlled constants, and for 0 ≤ j ≤ k, 0 ≤ β ≤ 1, j + β ≤ k + α it follows that

[u]j,β;B′∪T ≤ C(ε)|u|0;D′ + ε|u|∗k,α;B′∪T.

(We recall that the same notation C(ε) is being used for different functions of ε.) Note also that the interior and global norms are related by

|u|∗k,α;Ω ≤ max(1, dk+α)|u|k,α;Ω, d = Diam(Ω),

and, if Ω′ ⊂⊂ Ω with σ = Dist(Ω′, ∂Ω),

min(1, σk+α)|u|k,α;Ω′ ≤ |u|∗k,α;Ω.

Letting B″ = Bρ/2(x0) ∩ Ω, we infer

|u|j,β;B″ ≤ C(ε)|u|0;B′ + ε|u|k,α;B′ ≤ C(ε)|u|0;Ω + ε|u|k,α;Ω. (13.24)

Let Bρi/4(xi), xi ∈ ∂Ω, i = 1, . . . , N, be a finite collection of balls covering ∂Ω, such that the inequality (13.24) holds in each set B″i = Bρi/2(xi) ∩ Ω, with a constant Ci(ε). Let δ = min ρi/4 and C = C(ε) = max Ci(ε). Then at every point x0 ∈ ∂Ω we have B = Bδ(x0) ⊂ Bρi/2(xi) for some i, and hence

|u|j,β;B∩Ω ≤ C|u|0;Ω + ε|u|k,α;Ω. (13.25)

The remainder of the argument is a standard interior/boundary covering and is left to the reader.
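The simplest model of these interpolation inequalities is one-dimensional: Taylor's theorem gives |u′(x)| ≤ (2/h) sup|u| + (h/2) sup|u″| for every h > 0, and optimizing over h yields sup|u′| ≤ 2 (sup|u| · sup|u″|)^{1/2}. A quick numerical check of the pointwise bound (our own sketch, not from the notes):

```python
import math

def bound_ok(du_val, u_sup, d2u_sup, h):
    # pointwise interpolation bound |u'(x)| <= 2*sup|u|/h + (h/2)*sup|u''|
    return abs(du_val) <= 2.0 * u_sup / h + 0.5 * h * d2u_sup + 1e-12

# u = sin: sup|u| = 1, sup|u''| = 1, u' = cos.
for x in (0.0, 0.5, 1.0, 2.0, 3.0):
    for h in (0.1, 0.5, 1.0, 2.0, 10.0):
        assert bound_ok(math.cos(x), 1.0, 1.0, h)

# Optimal h = 2*sqrt(sup|u|/sup|u''|) gives sup|u'| <= 2*sqrt(sup|u|*sup|u''|);
# here sup|cos| = 1 <= 2, consistent with the bound.
assert 1.0 <= 2.0 * math.sqrt(1.0 * 1.0)
```
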
13.4.4 Global Estimates
Returning to the interior estimate, we can write the second derivative estimate of Theorem 13.4.1 in the form

[u]∗2,α;Ω ≤ C([u]∗2;Ω + 1),

where C depends on λ, Λ, n, ||ψ||2;Ω and Diam(Ω). Therefore, if λ, Λ are independent of u, we obtain by interpolation

||u||∗2,α;Ω ≤ C ( supΩ |u| + 1 ).

Now we want to prove the following global second derivative Hölder estimate, due essentially to Krylov [Kry87].

Theorem 13.4.5 (): Let Ω ⊂ Rn be a bounded domain with ∂Ω ∈ C3. Suppose u ∈ C4(Ω) ∩ C3(Ω) fulfils

F(D2u) = ψ in Ω, u = 0 on ∂Ω,

where ψ ∈ C2(Ω) and F ∈ C2(Sn) is assumed to satisfy i.) and ii.) stated at the beginning of this section. Then we have the estimate

|u|2,α;Ω ≤ C(|u|2;Ω + |ψ|2;Ω),

where α = α(n, Λ/λ) > 0 and C = C(n, Ω, Λ/λ).
Proof: First we straighten the boundary. Let x0 ∈ ∂Ω. By the definition of boundary regularity, there exists a ball B = B(x0) ⊂ Rn and a one-to-one mapping χ : B → D ⊂ Rn such that

i.) χ(B ∩ Ω) ⊂ Rn+ = {x ∈ Rn | xn > 0};
ii.) χ(B ∩ ∂Ω) ⊂ ∂Rn+;
iii.) χ ∈ C3(B), χ−1 ∈ C3(D).

Let us write y := χ(x). An elementary calculation shows

∂u/∂xi = (∂χk/∂xi)(x) · ∂u/∂yk,
∂2u/∂xi∂xj = (∂χk/∂xi)(∂χl/∂xj) · ∂2u/∂yk∂yl + (∂/∂xj)(∂χk/∂xi) · ∂u/∂yk.

Thus, the ellipticity condition is preserved under the transformation χ, with

λI ≤ [J Fr JT] ≤ ΛI, J := Dχ,

where 0 < λ ≤ Λ < ∞. Moreover, χ also preserves the Hölder spaces and transforms the equation into the more general form

F(x, u, Du, D2u) = 0.
Therefore it suffices to deal with the following situation:
Suppose u satisfies
F (x, u, Du, D2u) = 0 in B := B+R = {x ∈ Rn+ | |x| < R},
u = 0 on T := {x ∈ BR | xn = 0},
where F ∈ C3(B × R× Rn × Sn).
Differentiation with respect to xk, 1 ≤ k ≤ n − 1, leads to
(∂F/∂rij)Dijku + (∂F/∂pi)Diku + (∂F/∂u)Dku + ∂F/∂xk = 0,
which can be summarized in the form
Lv = f in B,
v = 0 on T. (13.26)
Here we have set v := Dku, 1 ≤ k ≤ n − 1, and
L = aijDij := (∂F/∂rij)Dij,
−f := (∂F/∂pi)Diku + (∂F/∂u)Dku + ∂F/∂xk,
with λI ≤ [aij] ≤ ΛI.
You can apply the Krylov boundary estimate (previous boundary result) to obtain
oscB+δ (v/xn) ≤ C(δ/R)^α ( oscB+R (v/xn) + (R/λ)|f|0;B+R )
            ≤ C(δ/R)^α ( |u|2;B+R + (R/λ)|f|0;B+R ),
where α,C depend on n and Λ/λ. By subtracting a linear function of xn from v if necessary,
we can assume that v(x) = O(|xn|2) as x → 0. Thus, fixing R and taking δ = xn, we find
v(x)/xn^{1+α} ≤ C( |u|2;B+R + |f|0;B+R ). (13.27)
This is the boundary estimate to start with.
For the interior estimate, Theorem 4.1 gives for any Br ⊂ B+R
oscBσr D2u ≤ Cσ^α( supBr |D2u| + 1 )
          ≤ Cσ^α( supBr |DD′u| + 1 ), (13.28)
where D′u = (D1u, . . . , Dn−1u), α = α(n,Λ/λ) > 0 and C depends furthermore on f . The
second inequality in (13.28) comes from the equation itself. In fact, F (x, u,Du,D2u) = 0
can be solved so that Dnnu = function of (x, u,Du,DD′u) and hence
|Dnnu| ≤ C(sup |DD′u|+ 1).
(13.28) implies in particular
oscBσr DD′u ≤ Cσ^α( supBr |DD′u| + 1 ).
By interpolating the interior norm, we get from (13.27)
[DDku]α;Br ≤ C Dist(Br, T)^{−1−α}( |D′u|0;Br + 1 ) ≤ C( |u|2;B+R + |f|0;B+R ), (13.29)
for 1 ≤ k ≤ n − 1.
The equation itself again shows that (13.29) holds true for k = n.
In summary, we finally obtain
[D2u]α;B+R ≤ C( |u|2;B+R + |ψ|2;B+R ).
The usual covering argument now yields the full global estimate. This completes the proof.
Corollary 13.4.6 (): Under the assumptions of the previous theorem, we have by interpola-
tion
|u|2,α;Ω ≤ C(|u|0;Ω + |ψ|2;Ω) ≤ C|ψ|2;Ω.
13.4.5 Generalizations
The method of proof of the theorem above provides more general results. We present one of
them without proof.
Theorem 13.4.7 (): If u ∈ C2(Ω) ∩ C0(Ω) solves the linear partial differential equation of
the type (13.26):
Lu = f in Ω
u = 0 on ∂Ω,
for a bounded domain Ω ⊂ Rn with ∂Ω ∈ C2, f ∈ L∞(Ω), and satisfies the interior estimate
oscBσR Du ≤ C0σ^α( oscBR Du + |f|0;Ω + 1 ),
then u ∈ C1,δ(Ω) with |u|1,δ;Ω ≤ C, for some δ = δ(n, α,Λ/λ) and C = C(n, α, C0,Ω,Λ/λ).
The form of the interior gradient estimate can be weakened (see [Tru84], Theorem 4).
13.5 Existence Theorems
After establishing various estimates, we are now able to investigate the problem of the
existence of solutions. This section is devoted to the solvability for equations of the form
F (D2u) = ψ. (13.30)
Two cases are principally discussed: (i) the uniformly elliptic case and (ii) the Monge-Ampere
equation. We begin by recalling the method of continuity; then (i) and (ii) are studied in turn.
13.5.1 Method of Continuity
First we quickly review the usual existence method of continuity.
Let B1 and B2 be Banach spaces and let F : U → B2 be a mapping, where U ⊂ B1 is an
open set. We say that F is differentiable at u ∈ U if there exists a bounded linear mapping
L : B1 → B2 such that
||F [u + h] − F [u] − Lh||B2 / ||h||B1 → 0 as h → 0 in B1. (13.31)
We write L = Fu and call it the Frechet derivative of F at u. When B1, B2 are Euclidean
spaces Rn, Rm, the Frechet derivative coincides with the usual notion of the differential and,
moreover, the basic theory for the infinite dimensional case can be modelled on that for the
finite dimensional case as usually treated in advanced calculus. In particular, it is evident
from (13.31) that the Frechet differentiability of F at u implies that F is continuous
at u and that the Frechet derivative Fu is determined uniquely by (13.31). We
call F continuously differentiable at u if F is Frechet differentiable in a neighbourhood of u
and the resulting mapping
v 7→ Fv ∈ E(B1,B2)
is continuous at u. Here E(B1,B2) denotes the Banach space of bounded linear mappings
from B1 to B2 with the norm given by
||L|| = sup_{v∈B1, v≠0} ||Lv||B2 / ||v||B1.
The chain rule holds for Frechet differentiation, that is, if F : B1 → B2, G : B2 → B3 are
Frechet differentiable at u ∈ B1 and F [u] ∈ B2, respectively, then the composite mapping G ◦ F
is differentiable at u ∈ B1 and
(G ◦ F )u = GF[u] ◦ Fu. (13.32)
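In the finite dimensional case the Frechet derivative is just the Jacobian matrix, and (13.32) reduces to the familiar matrix chain rule. A minimal numerical sketch of this reduction (the maps F, G below are illustrative choices, not from the text):

```python
# Finite-dimensional sanity check of the chain rule (G ∘ F)_u = G_{F[u]} ∘ F_u:
# Frechet derivatives reduce to Jacobians, composition to matrix multiplication.

def F(x):  # F : R^2 -> R^2
    return [x[0] ** 2 + x[1], x[0] * x[1]]

def G(y):  # G : R^2 -> R^2
    return [y[0] + 3.0 * y[1], y[0] * y[0]]

def jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f at x."""
    fx = f(x)
    J = []
    for i in range(len(fx)):
        row = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += h
            row.append((f(xp)[i] - fx[i]) / h)
        J.append(row)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

u = [1.0, 2.0]
lhs = jacobian(lambda x: G(F(x)), u)             # D(G ∘ F)(u)
rhs = matmul(jacobian(G, F(u)), jacobian(F, u))  # DG(F(u)) · DF(u)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-3
           for i in range(2) for j in range(2))
```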
The theorem of the mean also holds in the sense that if u, v ∈ B1, F : B1 → B2 is
differentiable on the closed line segment γ joining u and v in B1, then
||F [u] − F [v]||B2 ≤ K||u − v||B1, (13.33)
where
K = sup_{w∈γ} ||Fw||.
With the aid of these basic properties we may deduce an implicit function theorem for
Frechet differentiable mappings. Suppose that B1, B2 and X are Banach spaces and that
G : B1 × X → B2 is Frechet differentiable at a point (u, σ), where u ∈ B1 and σ ∈ X. The
partial Frechet derivatives, G1(u,σ), G2(u,σ), at (u, σ) are the bounded linear mappings from B1,
X, respectively, into B2 defined by
G(u,σ)(h, k) = G1(u,σ)(h) + G2(u,σ)(k)
for h ∈ B1, k ∈ X. We state the implicit function theorem in the following form.
Theorem 13.5.1 (): Let B1, B2 and X be Banach spaces and G a mapping from an open
subset of B1 ×X into B2. Let (u0, σ0) be a point in B1 ×X satisfying:
i.) G[u0, σ0] = 0;
ii.) G is continuously differentiable at (u0, σ0);
iii.) the partial Frechet derivative L = G1(u0,σ0) is invertible.
Then there exists a neighborhood N of σ0 in X such that the equation G[u, σ] = 0 is
solvable for each σ ∈ N , with solution u = uσ ∈ B1.
Sketch of proof: This may be proved by reduction to the contraction mapping principle.
Indeed the equation G[u, σ] = 0 is equivalent to the equation
u = Tu ≡ u − L−1G[u, σ],
and the operator T will, by virtue of (13.31) and (13.32), be a contraction mapping in a closed
ball B = Bδ(u0) in B1 for δ sufficiently small and σ sufficiently close to σ0 in X. One
can further show that the mapping F : X → B1 defined by σ → uσ for σ ∈ N , uσ ∈ B,
G[uσ, σ] = 0 is differentiable at σ0 with Frechet derivative
Fσ0 = −L−1G2(u0,σ0).
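The contraction iteration in the sketch above can be made concrete in one dimension. A minimal sketch, taking as an assumed model the scalar equation G[u, σ] = u³ + u − σ with (u0, σ0) = (0, 0), so that L = Gu(0, 0) = 1 (this model problem is illustrative, not from the text):

```python
# Contraction-mapping iteration u <- u - L^{-1} G[u, sigma] from the sketch above,
# applied to the scalar model equation G[u, sigma] = u^3 + u - sigma.
# Here L = G_u(0, 0) = 1, and T(u) = u - G(u, sigma) is a contraction near the root.

def G(u, sigma):
    return u ** 3 + u - sigma

def solve_by_contraction(sigma, u0=0.0, L=1.0, tol=1e-12, max_iter=100):
    """Iterate the map T(u) = u - L^{-1} G(u, sigma) to a fixed point."""
    u = u0
    for _ in range(max_iter):
        u_next = u - G(u, sigma) / L
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    raise RuntimeError("iteration did not converge")

u = solve_by_contraction(0.2)
assert abs(G(u, 0.2)) < 1e-10  # u solves u^3 + u = 0.2
```

For sigma close to sigma0 = 0 the derivative of T is 3u², which stays well below 1, so the iteration converges geometrically, exactly as the contraction mapping principle predicts.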
In order to apply the above theorem we suppose that B1 and B2 are Banach spaces with F
a mapping from an open subset U ⊂ B1 into B2. Let ψ be a fixed element in U and define
for u ∈ U , t ∈ R the mapping G : U × R→ B2 by
G[u, t] = F [u]− tF [ψ].
Let S and E be the subsets of [0, 1] and B1 defined by
S := {t ∈ [0, 1] | F [u] = tF [ψ] for some u ∈ U},
E := {u ∈ U | F [u] = tF [ψ] for some t ∈ [0, 1]}.
Clearly 1 ∈ S, ψ ∈ E so that S and E are not empty. Let us next suppose that the mapping F
is continuously differentiable on E with invertible Frechet derivative Fu. It follows then from
the implicit function theorem (previous theorem) that the set S is open in [0, 1]. Consequently
we obtain the following version of the method of continuity.
Theorem 13.5.2 (): Suppose F is continuously Frechet differentiable with invertible
Frechet derivative Fu. Fix ψ ∈ U and set
S := {t ∈ [0, 1] | F [u] = tF [ψ] for some u ∈ U},
E := {u ∈ U | F [u] = tF [ψ] for some t ∈ [0, 1]}.
Then the equation F [u] = 0 is solvable for u ∈ U if S is closed.
The aim is to apply this to the Dirichlet problem. Let us define
B1 := {u ∈ C2,α(Ω) | u = 0 on ∂Ω}, B2 := Cα(Ω),
and let F : B1 → B2 be given by
F [u] := F (x, u,Du,D2u).
Assume F ∈ C2,α(Ω × R × Rn × Sn). Then F is continuously Frechet differentiable in B1
with
Fv[u] = (∂F/∂rij)(x, v(x), Dv(x), D2v(x))Diju
      + (∂F/∂pi)(x, v(x), Dv(x), D2v(x))Diu
      + (∂F/∂z)(x, v(x), Dv(x), D2v(x))u.
Check this formula and observe that F ∈ C2,α is more than enough.
To proceed further, we recall the following result of classical Schauder theory proven in
Chapter 9, restated here for convenience:
Theorem 13.5.3 (): Consider the linear operator
Lu := ai jDi ju + biDiu + cu,
in a bounded domain Ω ⊂ Rn with ∂Ω ∈ C2,α, 0 < α < 1. Assume
aij, bi, c ∈ Cα(Ω), c ≤ 0,
λI ≤ [aij] ≤ ΛI, 0 < λ ≤ Λ < ∞.
Then for any function f ∈ Cα(Ω), there exists a unique solution u ∈ C2,α(Ω) satisfying
Lu = f in Ω, u = 0 on ∂Ω and |u|2,α;Ω ≤ C|f |α;Ω, where C depends on n, λ, |ai j , bi , c |α;Ω
and Ω.
Theorem 13.5.4 (): Let Ω ⊂ Rn be a bounded domain with ∂Ω ∈ C2,α for some 0 < α < 1.
Let U ⊂ C2,α(Ω) be open and let ψ ∈ U . Define
E := {u ∈ U | F [u] = σF [ψ] for some σ ∈ [0, 1] and u = ψ on ∂Ω},
where F ∈ C2,α(Γ ) with Γ := Ω× R× Rn × Sn. Suppose further
i.) F is elliptic on Ω with respect to u ∈ U ;
ii.) Fz ≤ 0 on U ; (or Fz ≥ 0 on U?)
iii.) E is bounded in C2,α(Ω);
iv.) the closure of E is contained in U , where the closure is taken in an appropriate sense.
Then there exists a unique solution u ∈ U of the following Dirichlet problem:
F [u] = 0 in Ω
u = ψ on ∂Ω.
Proof: We can reduce to the case of zero boundary values by replacing u with u − ψ.
The mapping G : B1 × R→ B2 is then defined by taking ψ ≡ 0 so that
G[u, σ] = F [u + ψ]− σF [ψ].
It then follows from the previous theorem and the remarks preceding the statement of this
theorem, that the given Dirichlet problem is solvable provided the set S is closed. However
the closure of S (and also of E) follows readily from the boundedness of E (and hypothesis
iv.)) by virtue of the Arzela-Ascoli theorem.
Observe that by this theorem, the solvability of the Dirichlet problem depends upon the
estimates in C2,α(Ω) independent of σ (this is why we need Schauder theory).
13.5.2 Uniformly Elliptic Case
Let Ω ⊂ Rn be a bounded domain with ∂Ω ∈ C3. Suppose F ∈ C2(Sn) satisfies the
following:
i.) λI ≤ Fr (s) ≤ ΛI for all s ∈ Sn with some fixed constants 0 < λ ≤ Λ.
ii.) F is concave.
We also assume without loss of generality that F (0) = 0.
Theorem 13.5.5 ((Krylov [Kry82])): Under the above assumptions, the Dirichlet problem
F (D2u) = ψ in Ω
u = 0 on ∂Ω(13.34)
is uniquely solvable for any ψ ∈ C2(Ω). The solution u belongs to C2,α(Ω) for each 0 <
α < 1.
Proof: Recall the a-priori estimate (global result prev. section)
[u]2,α;Ω ≤ C(|u|0;Ω + |ψ|2;Ω),
where α = α(n,Λ/λ) > 0, C = C(n, ∂Ω,Λ/λ).
Since we can write
F (D2u) = ( ∫0^1 Frij(θD2u) dθ ) Diju, (13.35)
by virtue of F (0) = 0, the classical maximum principle gives
|u|0;Ω ≤ (|ψ|0;Ω/λ)(Diam(Ω))².
It is easy to see that the corresponding estimates for F (D2u) = σψ, σ ∈ [0, 1] are inde-
pendent of σ. Thus the result follows from the method of continuity.
Remark: Once you get u ∈ C2(Ω), then the linear theory ensures u ∈ C2,α(Ω) for all
α < 1.
The above proof owes its validity to the regularity of F . If you want to examine a non-smooth
F , you need to mollify F ; replace F by
Fε(r) = ∫Sn ρ((r − s)/ε) F (s) ds,
where ρ ∈ C∞0 (Sn) is defined by
ρ ≥ 0, ∫Sn ρ(s) ds = 1.
Observe that Fε satisfies the assumptions (i) and (ii). Then solve the problem
Fε(D2uε) = ψ in Ω,
uε = 0 on ∂Ω,
and let ε → 0. The examples include the following Bellman equation:
F (D2u) = inf_{k=1,...,n} aij_k Diju.
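The Bellman operator is an infimum of linear functions of D²u and hence concave, which is exactly what assumption ii.) requires. A small numerical sketch of this midpoint concavity (the coefficient matrices below are hand-picked for illustration):

```python
# The Bellman operator F(M) = min_k tr(A_k M) is concave in M, being an
# infimum of linear maps; midpoint concavity check with two symmetric A_k.

A_LIST = [
    [[2.0, 0.0], [0.0, 1.0]],
    [[1.0, 0.5], [0.5, 2.0]],
]

def trace_prod(A, M):
    """tr(A M) = a_ij m_ji, i.e. the action a^{ij} D_ij u in matrix form."""
    n = len(A)
    return sum(A[i][j] * M[j][i] for i in range(n) for j in range(n))

def F(M):
    return min(trace_prod(A, M) for A in A_LIST)

M1 = [[1.0, 0.2], [0.2, 3.0]]
M2 = [[2.0, -0.1], [-0.1, 1.0]]
mid = [[(M1[i][j] + M2[i][j]) / 2.0 for j in range(2)] for i in range(2)]
assert F(mid) >= (F(M1) + F(M2)) / 2.0  # concavity at the midpoint
```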
Next, we wish to modify the argument to include the non-zero boundary value problems,
especially with continuous boundary values:
F (D2u) = ψ in Ω
u = g on ∂Ω(13.36)
where g ∈ C0(∂Ω).
As a first step, we approximate g by a sequence {gn} ⊂ C2(Ω) such that gn → g uniformly on ∂Ω
as n →∞. Secondly, in view of ∂Ω ∈ C3, you can construct barriers; for any y ∈ ∂Ω, there
exists a neighbourhood Ny of y and a function w ∈ C2(Ny ) such that
i.) w > 0 in (Ω ∩ Ny) \ {y},
ii.) w(y) = 0,
iii.) and F (D2w) ≤ ψ in Ω ∩Ny .
(refer to the Perron process for proofs about barriers).
Suppose {un} ⊂ C2,α(Ω) ∩ C0(Ω) are given by the solutions of
F (D2un) = ψ in Ω
un = gn on ∂Ω
Then we can appeal to the linear theory to conclude that un are equicontinuous on Ω,
taking into account (13.34) and the existence of barriers.
On the other hand, the Schauder estimate (put the theorem here) implies the equicon-
tinuity of Du, D2u on any compact subset of Ω. Therefore, by letting n → ∞, we obtain
u ∈ C0(Ω) ∩ C2,α(Ω) which enjoys (13.36).
13.5.3 Monge-Ampere Equation
The equation we wish to examine here is
detD2u = ψ in Ω, (13.37)
where Ω ⊂ Rn is a uniformly convex domain with ∂Ω ∈ C3.
(13.37) is elliptic with respect to u provided D2u > 0. Thus the set U of admissible
functions is defined to be
U := {u ∈ C2,α(Ω) | D2u > 0}.
Now our main result is the following.
Theorem 13.5.6 ((Krylov [Kry83a, Kry83b], Caffarelli, Nirenberg and Spruck [CNS84])):
Let Ω ⊂ Rn be a uniformly convex domain with ∂Ω ∈ C3 and let ψ ∈ C2(Ω) be such that
ψ > 0 in Ω. Then there exists a unique solution u ∈ U of
detD2u = ψ in Ω
u = 0 on ∂Ω.(13.38)
For the rest of this subsection, we prove this theorem, employing a-priori estimates step-by-step.
Proof:
Step 1: |u|2;Ω estimate implies |u|2,α;Ω estimate.
First, we show the concavity of F (r) := (det r)^{1/n} for r > 0. Recalling
(detD2u)^{1/n}(detA)^{1/n} ≤ (1/n)(aijDiju),
we have
(detD2u)^{1/n} = inf_{detA≥1} (1/n)(aijDiju).
This proves the concavity of F , since the RHS is a concave function of D2u.
Next recall detD2u = ∏_{i=1}^n λi, where λ1, . . . , λn are the eigenvalues of D2u.
Thus we have
λi = ψ/∏_{j≠i} λj ≥ inf ψ / (sup |D2u|)^{n−1},
which means that the C2 boundedness of u ensures the uniform ellipticity of the equation.
The global C2,α estimate established in the previous subsection then implies
|u|2,α;Ω ≤ C,
where C = C(|u|2;Ω, |ψ|2;Ω, ∂Ω). This proves Step 1.
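The midpoint concavity of F(r) = (det r)^{1/n} used in Step 1 can be checked numerically on positive definite matrices; a small sketch with two hand-picked symmetric positive definite matrices (illustrative only):

```python
# Midpoint-concavity check for F(r) = (det r)^(1/n) on positive definite matrices:
# F((A + B)/2) >= (F(A) + F(B))/2, which together with continuity gives concavity.

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def F(M):
    return det(M) ** (1.0 / len(M))

A = [[2.0, 0.5, 0.0],
     [0.5, 3.0, 0.1],
     [0.0, 0.1, 1.0]]   # symmetric, diagonally dominant -> positive definite
B = [[1.0, 0.2, 0.3],
     [0.2, 2.0, 0.0],
     [0.3, 0.0, 4.0]]

mid = [[(A[i][j] + B[i][j]) / 2.0 for j in range(3)] for i in range(3)]
assert F(mid) >= (F(A) + F(B)) / 2.0
```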
Step 2: |u|0;Ω and |u|1;Ω estimates.
Since u is convex, we have u ≤ 0 in Ω. Let w := A|x|² − B and compute
detD2w = (2A)^n.
Taking A, B large enough, we find
detD2w ≥ ψ in Ω,
w ≤ u on ∂Ω,
from which we obtain |u|0;Ω ≤ C by applying the maximum principle.
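The barrier computation can be verified numerically: the Hessian of w(x) = A|x|² − B is the constant matrix 2A·I, so det D²w is constant and can be made to dominate sup ψ by taking A large. A quick finite-difference sketch (the choices n = 3, A = 2, B = 5 and the test point are arbitrary):

```python
# Finite-difference Hessian of the barrier w(x) = A|x|^2 - B.
# D^2 w = 2A * I, a constant matrix, so det(D^2 w) grows with A and can be
# made to dominate any given psi by choosing A large enough.

A_COEF, B_COEF, N = 2.0, 5.0, 3

def w(x):
    return A_COEF * sum(xi * xi for xi in x) - B_COEF

def hessian(f, x, h=1e-4):
    """Central-difference Hessian of f at x (exact for quadratics, up to rounding)."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

H = hessian(w, [0.3, -1.2, 0.7])
for i in range(N):
    for j in range(N):
        expected = 2.0 * A_COEF if i == j else 0.0
        assert abs(H[i][j] - expected) < 1e-5   # D^2 w = 2A * I
```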
Next we want to give a bound for |u|1;Ω. The convexity implies
supΩ |Du| ≤ sup∂Ω |Du|.
Thus it suffices to estimate Du on the boundary ∂Ω. More precisely, we wish to estimate
uν from above, where uν denotes the exterior normal derivative of u on ∂Ω. This is because
uν ≥ 0 on ∂Ω and the tangential derivatives are zero.
We exploit the uniform convexity of the domain Ω; for any y ∈ ∂Ω, there exists a C2
function w defined on a ball By ⊃ Ω such that w ≤ 0 in Ω, w(y) = 0 and detD2w ≥ ψ in
Ω. This can be seen by taking By ⊃ Ω with ∂By ∩ ∂Ω = {y} and considering a function of
the type A|x − x0|² − B as before.
Therefore, we infer
w ≤ u in Ω,
and in particular wν ≥ uν at y. Since ∂Ω is compact, we obtain the |u|1;∂Ω estimate.
Step 3: sup∂Ω |D2u| estimate implies supΩ |D2u| estimate.
We use the concavity of
F (D2u) := (detD2u)^{1/n} = ψ^{1/n}.
As in the linearization argument above, we deduce
FijDijDγγu ≥ Dγγ(ψ^{1/n}) for |γ| = 1.
Then the linear Alexandrov-Bakelman maximum principle leads to
Dγγu ≤ sup∂Ω Dγγu + C · supΩ |Dγγ(ψ^{1/n})| / Tr(Fij).
On the other hand, we have
Fij = (1/n)(detD2u)^{−1+1/n} · u^{ij},
where [u^{ij}] denotes the cofactor matrix of [uij]. Compute
det[Fij] = 1/n^n,
so that by the arithmetic-geometric mean inequality
Tr(Fij)/n ≥ (det[Fij])^{1/n} = 1/n.
Thus, there holds
Dγγu ≤ sup∂Ω Dγγu + C,
where C depends only on the known quantities. This proves Step 3.
Step 4: sup∂Ω |D2u| estimate.
Take any y ∈ ∂Ω. Without loss of generality, we may suppose that y is the origin and
the xn-axis points in the interior normal direction.
Near the origin, ∂Ω is represented by
xn = ω(x′) := (1/2)Ωαβxαxβ + O(|x′|³).
Here x ′ := (x1, . . . , xn−1) and in the summation, Greek letters go from 1 to n − 1. The
uniform convexity of ∂Ω is equivalent to the positivity of Ωαβ. For more details, see (section
after next).
Our aim is an estimation for |uij(0)|. First, we see that the boundary condition u = 0 on
∂Ω implies
(Dα + ωαDn)(Dβ + ωβDn)u = 0
on ∂Ω near the origin; in particular,
|Dαβu(0)| ≤ C
at the origin. Next, differentiate the equation detD2u = ψ to obtain
u^{ij}Dij(Dαu + ωαDnu) = Dαψ + ωαDnψ + 2u^{ij}ωαiDjnu + u^{ij}DnuDijω.
By virtue of u^{ij}Djnu = δin detD2u, there holds
|u^{ij}Dij(Dαu + ωαDnu)| ≤ C(1 + Tr(u^{ij}))
in Ω ∩ N , where N denotes a suitable neighbourhood of the origin.
On the other hand, introduce a barrier function of the type
w(x) := −A|x |2 + Bxn,
and compute
ui jDi jw = −2ATrui j .
Choosing A and B large enough, we conclude that
|u^{ij}Dij(Dαu + ωαDnu)| + u^{ij}Dijw ≤ 0 in Ω ∩ N ,
|Dαu + ωαDnu| ≤ w on ∂(Ω ∩ N ).
By the maximum principle, we now find
|Dαu + ωαDnu| ≤ w in Ω ∩ N
and hence
|Dnαu(0)| ≤ B.
To summarize, we have so far derived
|Dklu(0)| ≤ C if k + l < 2n. (13.39)
Finally, we want to estimate |Dnnu(0)|. Since u = 0 on ∂Ω, the unit outer normal ν on ∂Ω
is given by
ν = Du/|Du|
and so
D2u(0) = ( |Du|D′ν   Dinu
           Dniu      Dnnu )(0), (13.40)
where D′ := (∂/∂x1, . . . , ∂/∂xn−1). The choice of our coordinates gives
D′ν(0) = Diag[κ′1, . . . , κ′n−1](0)
and |Du(0)| = |Dnu(0)|. Here κ′1, . . . , κ′n−1 denote the principal curvatures of ∂Ω at 0 (see
the section after next). By our regularity assumption on ∂Ω, for any y ∈ ∂Ω, there exists an
interior ball By ⊂ Ω with ∂By ∩ ∂Ω = {y}. Thus the comparison argument as before yields
|Du| ≥ δ > 0 (13.41)
for some δ > 0.
In view of (13.40), we have
ψ(0) = detD2u(0)
     = |Dnu|^{n−2} ∏_{i=1}^{n−1} κ′i ( |Dnu|Dnnu − ∑_{i=1}^{n−1} (Dinu)²/κ′i )(0),
from which we infer
0 < Dnnu(0) ≤ C, (13.42)
taking into account (13.41) and (13.39) and the uniform convexity κ′i > 0. This proves our
claim.
Now we have established an estimate for |u|2,α;Ω and the method of continuity can be
applied to complete the proof.
13.5.4 Degenerate Monge-Ampere Equation
Here we want to generalize (the previous theorem) so that ψ merely enjoys ψ ≥ 0 in Ω.
In the proof of (the above theorem), Steps 2 and 3 are not altered. Step 4 is also
as before except for the estimation of Dnu away from zero (see (13.41)), which follows from
the convexity of u and being able to assume ψ(x0) > 0 for some x0 ∈ Ω. Step 1 does
not hold true in general.
To deal with these points, we approximate ψ by ψ + ε for ε > 0 sufficiently small.
Theorem (above) is applicable and we obtain a convex C2,α solution uε. We wish to let
ε → 0. However, an estimation independent of ε can be obtained only up to |u|1;Ω.
In summary, by virtue of the convexity of uε, we derive
Theorem 13.5.7 (): Let Ω ⊂ Rn be a uniformly convex domain with ∂Ω ∈ C3 and let
ψ ∈ C2(Ω) be non-negative in Ω. Then there exists a unique solution u ∈ C1,1(Ω) of
detD2u = ψn in Ω
u = 0 on ∂Ω.(13.43)
13.5.4 Boundary Curvatures
Here we collect some properties of boundary curvatures and the distance function, which are
taken from the Appendix in [GT01, Chapter 14].
Let Ω ⊂ Rn be a bounded domain with its boundary ∂Ω ∈ C2. Taking a rotation of
coordinates if necessary, we may assume the xn axis lies in the direction of the inner normal
to ∂Ω at the origin. Also, there exists a neighbourhood N of the origin such that ∂Ω is given
by the height function xn = ω(x′), where x′ := (x1, . . . , xn−1). Moreover, we can assume
that ω is represented by
ω(x′) = (1/2)Ωαβxαxβ + O(|x′|³), Ωαβ = Dαβω(0),
in N ∩ T , where T denotes the tangent hyperplane to ∂Ω at the origin.
The principal curvatures of ∂Ω at the origin are now defined as the eigenvalues of [Ωαβ],
and we denote them by κ′1, . . . , κ′n−1. The corresponding eigenvectors are called the principal
directions.
We define the tangential gradient ∂ on ∂Ω as
∂ := D − ν(ν ·D),
where ν denotes the unit outer normal to ∂Ω. ν can be extended smoothly in a neighbourhood
of ∂Ω. In terms of ∂, we infer that κ′1, . . . , κ′n−1, 0 are given by the eigenvalues of [∂ν].
Let d(x) := Dist(x, ∂Ω) for x ∈ Ω. Then we find d ∈ C0,1(Ω) and d ∈ C2 on the set
{0 < d < a}, where a := 1/max∂Ω|κ′|. We call d(x) the distance function. Remark that
ν(x) = −Dd(x) for x ∈ ∂Ω.
A principal coordinate system at y ∈ ∂Ω consists of the xn axis being along the unit
normal and others along the principal directions. For x ∈ {x ∈ Ω | 0 < d(x) < a}, let
y = y(x) ∈ ∂Ω be such that |y(x) − x| = d(x). Then with respect to a principal coordinate
system at y = y(x), we have
[D2d(x)] = Diag[ −κ′1/(1 − dκ′1), . . . , −κ′n−1/(1 − dκ′n−1), 0 ],
and for α, β = 1, 2, . . . , n − 1,
Dαβu = ∂α∂βu − Dnu · κ′αδαβ = ∂α∂βu − Dnu · Ωαβ.
If u = 0 on ∂Ω, then Dαβu = −Dnu ·Ωαβ for α, β = 1, 2, . . . , n − 1 at y = y(x).
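The formula for [D²d(x)] can be sanity-checked on the disc BR(0) ⊂ R², where d(x) = R − |x|, the single principal curvature is κ′ = 1/R, and the predicted nonzero eigenvalue is −κ′/(1 − dκ′) = −1/|x|. A small finite-difference sketch (the values R = 2 and the test point are arbitrary):

```python
import math

# Check of the [D^2 d] formula for the disc B_R(0) in R^2, where d(x) = R - |x|.
# The principal curvature is kappa' = 1/R, and the formula predicts the nonzero
# eigenvalue of D^2 d(x) to be -kappa'/(1 - d kappa') = -1/|x|.

R = 2.0

def d(x):
    return R - math.hypot(x[0], x[1])

def hess(f, x, h=1e-5):
    """Central-difference Hessian of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

x = [1.2, 0.0]                               # point with 0 < d(x) < R
H = hess(d, x)
# In these coordinates e_2 is tangential and e_1 is the (radial) normal:
assert abs(H[1][1] - (-1.0 / 1.2)) < 1e-4    # predicted eigenvalue -1/|x|
assert abs(H[0][0]) < 1e-4                   # zero eigenvalue along the normal
```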
13.5.6 General Boundary Values
The estimations in (the Monge-Ampere subsection) extend automatically to the case of general
Dirichlet boundary values given by a function φ ∈ C3(Ω), except for the double normal second
derivative on the boundary. Here the resultant estimate (for φ ∈ C3,1(Ω)) is due to Ivochkina
[Ivo90]. We indicate here a simple proof of this estimate using a recent trick introduced by
Trudinger in [Tru95], in the study of Hessian and curvature equations. Accordingly, let us
suppose that u ∈ C3(Ω) is a convex solution of the Dirichlet problem,
detD2u = ψ in Ω
u = φ on ∂Ω.(13.44)
with ψ ∈ C2(Ω), infΩ ψ > 0 as before, and φ ∈ C4(Ω), ∂Ω ∈ C4. Fix a point y ∈ ∂Ω and a
principal coordinate system at y . Then the estimation of Dnnu(y) is equivalent to estimation
away from zero of the minimum eigenvalue λ′ of the matrix
Dαβu = ∂α∂βu − Dnu · Ωαβ, 1 ≤ α, β ≤ n − 1.
Now for ξ = (ξ1, . . . , ξn−1), |ξ| = 1, the function
w = Dnu − (∂α∂βφ ξαξβ)/(Ωαβ ξαξβ) − A|x′|²
will take a maximum value at a point z ∈ N ∩ ∂Ω, for sufficiently large A, depending on N ,
max |Du|, max |D2φ| and ∂Ω. But then an estimate
Dnnu(z) ≤ C
follows by the same barrier considerations as before, whence λ′(z) is estimated away from
zero and consequently λ′(y) as required. Note that this argument does not extend to the
degenerate case. We thus establish the following extension of (non-deg monge-ampere exist
theorem) to general boundary values [CNS84].
Theorem 13.5.8 (): Let Ω be a uniformly convex domain with ∂Ω ∈ C4, φ ∈ C4(Ω) and
let ψ ∈ C2(Ω) be such that ψ > 0 in Ω. Then there exists a unique solution u ∈ U of the
Dirichlet problem:
detD2u = ψ in Ω
u = φ on ∂Ω.
13.6 Applications
This section is concerned with some applications and extensions.
13.6.1 Isoperimetric Inequalities
We give a new proof of the isoperimetric inequalities via the existence theorem of Monge-
Ampere equations.
Theorem 13.6.1 (): Let Ω be a bounded domain in Rn with its boundary ∂Ω ∈ C2. Then
we have
|Ω|^{1−1/n} ≤ (1/(nωn^{1/n})) Hn−1(∂Ω), (13.45a)
|Ω|^{1−2/n} ≤ (1/(n(n − 1)ωn^{2/n})) ∫∂Ω H′ dHn−1 if H′ ≥ 0 on ∂Ω, (13.45b)
where | · | and Hn−1( · ) denote the usual Lebesgue measure on Rn and the (n−1)-dimensional
Hausdorff measure, respectively. H′ is the mean curvature of ∂Ω; i.e.
H′ := ∑_{i=1}^{n−1} κ′i.
These inequalities are sharp on balls.
(13.45a) is the classical isoperimetric inequality. We show the proof employing the
Monge-Ampere existence theorem. (13.45b) is given in [Tru94].
Proof. We may assume without loss of generality that 0 ∈ Ω. Let Ω ⊂⊂ Br (0) ⊂⊂ BR(0).
We want to solve
det[D2u] = χ in BR,
u = 0 on ∂BR, (13.46)
where χ := χΩ denotes the characteristic function of Ω.
First we approximate χ by ψ ∈ C2(BR) with
0 ≤ ψ ≤ χ on BR;
ψ ≡ 1 on Ωδ := {x ∈ Ω | Dist(x, ∂Ω) > δ},
for sufficiently small δ > 0. From Theorem 13.5.7, we have a solution u ∈ C1,1(BR) of (13.46),
where χ is replaced by ψ. Moreover, u ∈ C3({x ∈ BR | ψ(x) > 0}).
Proof of (13.45a). The Alexandrov maximum principle implies
ωn supΩ |u|^n ≤ (R + r)^n ∫BR det[D2u].
Since u is convex, we deduce
ωn supΩ |Du|^n ≤ ωn supΩ |u|^n/(R − r)^n ≤ ((R + r)/(R − r))^n ∫BR det[D2u] = ((R + r)/(R − r))^n |Ω|,
and hence
supΩ |Du| ≤ (1/ωn^{1/n}) ((R + r)/(R − r)) |Ω|^{1/n}. (13.47)
By the arithmetic-geometric inequality, we have
∆u/n ≥ (det[D2u])^{1/n} = 1 on Ωδ.
An integration gives
|Ωδ| ≤ (1/n) ∫Ωδ ∆u = (1/n) ∫∂Ωδ ν · Du dHn−1
    ≤ (1/(nωn^{1/n})) Hn−1(∂Ωδ) ((R + r)/(R − r)) |Ω|^{1/n},
by virtue of (13.47). Here ν denotes the unit outer normal. Letting R → ∞ and δ → 0, we
find the desired result.
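The sharpness of (13.45a) on balls is easy to check directly, using ωn = π^{n/2}/Γ(n/2 + 1) for the volume of the unit ball; a quick numerical sketch (not part of the proof):

```python
import math

# For a ball B_r in R^n: |B_r| = omega_n r^n and H^{n-1}(dB_r) = n omega_n r^{n-1},
# so |B_r|^{1-1/n} = H^{n-1}(dB_r) / (n omega_n^{1/n}) holds with equality,
# i.e. (13.45a) is sharp on balls.

def omega(n):
    """Volume of the unit ball in R^n."""
    return math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)

for n in range(2, 8):
    for r in (0.5, 1.0, 3.0):
        vol = omega(n) * r ** n
        area = n * omega(n) * r ** (n - 1)
        lhs = vol ** (1.0 - 1.0 / n)
        rhs = area / (n * omega(n) ** (1.0 / n))
        assert abs(lhs - rhs) < 1e-9 * max(1.0, rhs)
```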
Proof of (13.45b). Recalling the inequality
(∏_{i=1}^n λi)^{1/n} ≤ ( (2/(n(n − 1))) ∑_{i<j} λiλj )^{1/2}
for λi ≥ 0, we see that
(det[D2u])^{1/n} ≤ ( (1/(n(n − 1))) [ (∆u)² − ∑_{i,j} (Diju)² ] )^{1/2}.
Integrating (∆u)² − ∑_{i,j}(Diju)² over Ω, we obtain
∫Ω [ (∆u)² − ∑_{i,j} (Diju)² ] dx = ∫Ω (DiiuDjju − DijuDiju) dx
  = −∫Ω Diu(Dijju − Dijju) dx + ∫∂Ω (νiDiuDjju − νjDiuDiju) dHn−1
  = ∫∂Ω [ (ν · Du)∆u − νjDiuDiju ] dHn−1. (13.48)
For any y ∈ ∂Ω, take the principal coordinate system at y. Noting that ∂ denotes
the tangential gradient, there holds
(13.48) = ∫∂Ω [ (ν · Du)(∂i∂iu + (ν · Du)H′) − ∑_{i+j<2n} νjDiuDiju ] dHn−1
        = ∫∂Ω [ (ν · Du)²H′ + (ν · Du)∂i∂iu − DiuDi(ν · Du) ] dHn−1
        = ∫∂Ω [ (ν · Du)²H′ + (ν · Du)∂i∂iu − ∂iu ∂i(ν · Du) ] dHn−1
        = ∫∂Ω [ (ν · Du)²H′ + 2(ν · Du)∂i∂iu ] dHn−1,
by virtue of the integration formula (see [GT01, Lemma 16.1])
∫∂Ω ∂iφ dHn−1 = ∫∂Ω H′νiφ dHn−1. (13.49)
Since the convexity of u implies
(ν · Du)H′ + ∂i∂iu = Diiu ≥ 0,
we obtain for H′ ≥ 0
∫Ω [ (∆u)² − |D2u|² ] dx ≤ ∫∂Ω (K²H′ − 2K∂i∂iu) dHn−1 ≤ K² ∫∂Ω H′ dHn−1,
where K := max∂Ω |ν · Du|, by (13.49) again.
Using (13.47), replacing Ω by Ωδ and letting δ → 0, R → ∞ as before, we obtain
(13.45b).
13.7 Exercises
12.1: Let u ∈ C2(Ω) satisfy u ≥ 0 on ∂Ω and
C[u] := ∆u − νiνjDiju ≥ 0 in Ω,
where
ν := Du/|Du| if |Du| ≠ 0, and ν := 0 otherwise.
Using (13.45b) show that
||u||_{L^{n/(n−2)}(Ω)} ≤ (1/(n(n − 1)ωn^{2/n})) ∫Ω C[u].
Appendices
A Notation
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman

For derivatives, D = ∇ and Du = (∂u/∂x1, . . . , ∂u/∂xn).
Bibliography
[Ant80] Stuart S. Antman, The equations for large vibrations of strings, Amer. Math.
Monthly 87 (1980), no. 5, 359–370. MR MR567719 (81e:73027) 1.6
[Bre97] Glen E. Bredon, Topology and geometry, Graduate Texts in Mathematics, vol. 139,
Springer-Verlag, New York, 1997, Corrected third printing of the 1993 original. MR
MR1700700 (2000b:55001) 8.7
[CNS84] L. Caffarelli, L. Nirenberg, and J. Spruck, The Dirichlet problem for nonlinear
second-order elliptic equations. I. Monge-Ampere equation, Comm. Pure Appl.
Math. 37 (1984), no. 3, 369–402. MR MR739925 (87f:35096) 13.5.6, 13.5.6
[Eva82] Lawrence C. Evans, Classical solutions of fully nonlinear, convex, second-order ellip-
tic equations, Comm. Pure Appl. Math. 35 (1982), no. 3, 333–363. MR MR649348
(83g:35038) 13.4
[Gia93] Mariano Giaquinta, Introduction to regularity theory for nonlinear elliptic sys-
tems, Lectures in Mathematics ETH Zurich, Birkhauser Verlag, Basel, 1993. MR
MR1239172 (94g:49002) 9.4
[Giu03] Enrico Giusti, Direct methods in the calculus of variations, World Scientific Pub-
lishing Co. Inc., River Edge, NJ, 2003. MR MR1962933 (2004g:49003) 9.4
[GT01] David Gilbarg and Neil S. Trudinger, Elliptic partial differential equations of second
order, Classics in Mathematics, Springer-Verlag, Berlin, 2001, REPRINT of the
1998 edition. MR MR1814364 (2001k:35004) 8.3, 13.4, 13.4.2, 13.5.5, 13.6.1
[Hen81] Daniel Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in
Mathematics, vol. 840, Springer-Verlag, Berlin, 1981. MR MR610244 (83j:35084)
10.9
[Ivo90] N. M. Ivochkina, The Dirichlet problem for the curvature equation of order m,
Algebra i Analiz 2 (1990), no. 3, 192–217. MR MR1073214 (92e:35072) 13.5.6
[Kry82] N. V. Krylov, Boundedly inhomogeneous elliptic and parabolic equations, Izv.
Akad. Nauk SSSR Ser. Mat. 46 (1982), no. 3, 487–523, 670. MR MR661144
(84a:35091) 13.4, 13.5.5
[Kry83a] , Degenerate nonlinear elliptic equations, Mat. Sb. (N.S.) 120(162)
(1983), no. 3, 311–330, 448. MR MR691980 (85f:35096) 13.5.6
[Kry83b] , Degenerate nonlinear elliptic equations. II, Mat. Sb. (N.S.) 121(163)
(1983), no. 2, 211–232. MR MR703325 (85f:35097) 13.5.6
[Kry87] , Nonlinear elliptic and parabolic equations of the second order, Mathe-
matics and its Applications (Soviet Series), vol. 7, D. Reidel Publishing Co., Dor-
drecht, 1987, Translated from the Russian by P. L. Buzytsky [P. L. Buzytskiı]. MR
MR901759 (88d:35005) 13.4.4
[Paz83] A. Pazy, Semigroups of linear operators and applications to partial differential equa-
tions, Applied Mathematical Sciences, vol. 44, Springer-Verlag, New York, 1983.
MR MR710486 (85g:47061) 10.9
[Roy88] H. L. Royden, Real analysis, third ed., Macmillan Publishing Company, New York,
1988. MR MR1013117 (90g:00004) 8.2, 4, 8.8, 8.8.1, 8.8.2
[RR04] Michael Renardy and Robert C. Rogers, An introduction to partial differential equa-
tions, second ed., Texts in Applied Mathematics, vol. 13, Springer-Verlag, New
York, 2004. MR MR2028503 (2004j:35001) 10.10, 11.1
[Rud91] Walter Rudin, Functional analysis, second ed., International Series in Pure and
Applied Mathematics, McGraw-Hill Inc., New York, 1991. MR MR1157815
(92k:46001) 8.2, 8.2, 8.2
[Saf84] M. V. Safonov, The classical solution of the elliptic Bellman equation, Dokl. Akad.
Nauk SSSR 278 (1984), no. 4, 810–813. MR MR765302 (86f:35081) 13.4, 13.4.1
[TP85] Morris Tenenbaum and Harry Pollard, Ordinary differential equations, Dover Pub-
lications Inc., Mineola, NY, 1985, New Ed Edition. 8.2
[Tru84] Neil S. Trudinger, Boundary value problems for fully nonlinear elliptic equations,
Miniconference on nonlinear analysis (Canberra, 1984), Proc. Centre Math. Anal.
Austral. Nat. Univ., vol. 8, Austral. Nat. Univ., Canberra, 1984, pp. 65–83. MR
MR799213 (87a:35083) 13.4.5
[Tru94] , Isoperimetric inequalities for quermassintegrals, Ann. Inst. H. Poincare
Anal. Non Lineaire 11 (1994), no. 4, 411–425. MR MR1287239 (95k:52013)
13.6.1
[Tru95] , On the Dirichlet problem for Hessian equations, Acta Math. 175 (1995),
no. 2, 151–164. MR MR1368245 (96m:35113) 13.5.6
Index
adjoint operator, see operator, adjoint
almost everywhere
criterion, 202
almost orthogonal element, 185
Arzela-Ascoli Theorem, 189
Banach Space, 181
reflexive, 192
separable, 182
Banach’s fixed point theorem, 182
Banach-Alaoglu Theorem, 199
Beppo-Levi Convergence Theorem, see Mono-
tone Convergence Theorem
bidual, 192
bilinear form, 195
bounded, 195
coercive, 195
Caccioppoli inequalities, 285
Caccioppoli’s inequality
elliptic systems, for, 288
Laplace’s Eq., for, 286
Cauchy-Schwarz inequality, 193
compact operator, see operator, compact
Campanato Spaces, 223
L∞, 229
conservation law, 5
contraction, 182
dual space, 192
canonical imbedding, 192
Egoroff’s Theorem, 203
eigenspace, 189
eigenvalue, 189
multiplicity, 189
eigenvector, 189
Einstein’s Equations, 5
Fatou’s Lemma, 203
Fourier’s Law of Cooling, 4
Fredholm Alternative
Hilbert Spaces, for, 198
normed linear spaces, for, 186–188
Fubini’s Theorem, 203
Heat Equation, 3
steady state solutions, 4
heat flux, 3
Hilbert Space, 193
inner product, see scalar product
integrable function, 202
Laplace’s Equation, 4
Lax-Milgram theorem, 195
Lebesgue’s Dominated Convergence Theorem,
203
Legendre-Hadamard condition, 288
Lioville’s Theorem, 288
Maxwell’s Equations, 5
measurable
function, 202
measure, 201
Hausdorff, 201
Lebesgue, 201
outer, 201
Method of Continuity
for bounded linear operators, 184
metric, 181
minimizing sequence, 194
Monotone Convergence Theorem, 203
Morrey Space, 222
Noether’s Theorem, 5
norm, 181
Normed Linear Space, 181
open
ball, 181
set, 181
operator
adjoint, 192
bounded, 183
compact, 185
parallelogram identity, 194
Picard-Lindelof Theorem, 183
projection theorem, 194
regularity
linear elliptic eq., for, 285–310
resolvent operator, 189
resolvent set, 189
Riesz-representation theorem, 194
scalar product, 193
Scalar Product Space, 193
Schrodinger’s Equation, 5
set
Lebesgue measurable, 201
σ-ring, 201
simple function, 202
spectral theorem
compact operators, for, 189
spectrum
compact operator, of a, 189
Topological Vector Space, 181
traffic flow, 5
Lighthill-Whitham-Richards model, 6
Transport Equation, 7
triangle inequality, 181, 194
weak convergence, 198
weak-* convergence, 199