
Bruce K. Driver

Math 150B Differential Geometry

March 5, 2020 (File: 150BNotes.tex)


Contents

Part Homework Problems

0 Math 150B Homework Problems: Winter 2020
   0.0 Homework 0, Due Wednesday, January 8, 2020 (Not to be collected)
   0.1 Homework 1. Due Thursday, January 16, 2020
   0.2 Homework 2. Due Thursday, January 23, 2020
   0.3 Homework 3. Due Thursday, January 30, 2020
   0.4 Homework 4. Due Thursday, February 6, 2020
   0.5 Homework 5. Due Thursday, February 13, 2020
   0.6 Homework 6. Due Thursday, February 20, 2020
   0.7 Homework 7. Now Due Friday, February 28, 2020 at 7:00PM
   0.8 Homework 8. Due Friday, March 6, 2020 at 7:00PM
   0.9 Homework 9 = Take Home Final Exam: Due Tuesday March 17 at 8AM

Part I Background Material

1 Introduction

2 Permutations Basics

3 Integration Theory Outline
   3.1 Exercises
   3.2 *Appendix: Another approach to the linear change of variables theorem

Part II Multi-Linear Algebra

4 Properties of Volumes


5 Multi-linear Functions (Tensors)
   5.1 Basis and Dual Basis
   5.2 Multi-linear Forms

6 Alternating Multi-linear Functions
   6.1 Structure of Λ^n(V*) and Determinants
   6.2 Determinants of Matrices
   6.3 The structure of Λ^k(V*)

7 Exterior/Wedge and Interior Products
   7.1 Consequences of Theorem 7.1
   7.2 Interior product
   7.3 Exercises
   7.4 *Proof of Theorem 7.1

Part III Differential Forms on U ⊂o R^n

8 Derivatives, Tangent Spaces, and Differential Forms
   8.1 Derivatives and Chain Rules
   8.2 Tangent Spaces and More Chain Rules
   8.3 Differential Forms
   8.4 Vector-Fields and Interior Products
   8.5 Pull Backs
   8.6 Exterior Differentiation

9 An Introduction to Integration of Forms
   9.1 Integration of Forms Over “Parameterized Surfaces”
   9.2 The Goal (Degree Theorem)

10 Elements of Point Set (Metric) Topology
   10.1 Continuity
   10.2 Closed and Open Sets
   10.3 Compactness
      10.3.1 *More optional results
   10.4 Proper maps
   10.5 Connectedness and the Intermediate Value Theorem

11 Degree Theory and Change of Variables
   11.1 Constructing elements of C∞_c (open rectangles)
   11.2 Smooth Urysohn's Lemma and Partitions of Unity
   11.3 The statement of the degree theorem
   11.4 Proof of the Degree Theorem


   11.5 Degrees of Diffeomorphisms
   11.6 *Appendix: Constructing the dominating function

12 The Poincare Lemma Proof
   12.1 Differentiating past the integral
   12.2 A Local Poincare Lemma
   12.3 Poincare Lemma for connected sets

Part IV Calculus on Manifolds

13 Manifold Primer
   13.1 Imbedded Submanifolds
   13.2 Tangent Planes and Spaces

14 Integration over Manifolds
   14.1 Orientation and Integrating Forms
   14.2 Manifolds with Boundary and Boundary Orientations
   14.3 Stokes' theorem
   14.4 Example of Green's Theorem

15 Final Exercises
   15.1 Metric Topology
   15.2 Exact ODEs
   15.3 Integration on Manifolds Exercises

Part V Appendices

A *More metric space results
   A.1 Compactness in Euclidean Spaces
   A.2 Open Cover Compactness
   A.3 Equivalence of Sequential and Open Cover Compactness in Metric Spaces
   A.4 Connectedness
      A.4.1 Connectedness Problems
   A.5 *More on the closure and related operations

B Smoothing by Convolution


Part

Homework Problems


0 Math 150B Homework Problems: Winter 2020

Problems are from our textbook, "Differential Forms," by Guillemin and Haine, or from the lecture notes as indicated. The problems from the lecture notes are restated here; however, there may be broken references. If this is the case, please find the corresponding problem in the lecture notes for the proper references and for more context on the problem.

0.0 Homework 0, Due Wednesday, January 8, 2020 (Not to be collected)

• Lecture note Exercises: 3.1, 3.2, 3.3, 3.4, and 3.5.

0.1 Homework 1. Due Thursday, January 16, 2020

• Lecture note Exercises: 3.6, 5.1, 5.2, 5.5
• Book Exercises: 1.2.vi.

0.2 Homework 2. Due Thursday, January 23, 2020

• Lecture note Exercises: 5.3, 5.4, 6.1, 6.2, 6.3, 6.4, 6.5
• Book Exercises: 1.3.iii., 1.3.v., 1.3.vii, 1.4ix

0.3 Homework 3. Due Thursday, January 30, 2020

• Lecture note Exercises: 7.2, 7.3, 8.1, 8.2, 8.3, 8.4, 8.5
• Look at (but don't hand in) Exercises 7.4, 7.5 and the Book Exercises: 1.7.iv., 1.8vi.

0.4 Homework 4. Due Thursday, February 6, 2020

These problems are part of your midterm and are to be worked on by yourself. They are due at the start of the in-class portion of the midterm, which is in class on Thursday, February 6, 2020.

• Lecture note Exercises: 6.6, 6.7, 7.1, 8.6, 8.7

0.5 Homework 5. Due Thursday, February 13, 2020

• Lecture note Exercises: 8.8, 8.9, 8.10, 8.11, 8.12, 8.13
• Book Exercises: 2.3.ii., 2.3.iii., 2.4.i

0.6 Homework 6. Due Thursday, February 20, 2020

• Book Exercises: 2.1vii, 2.1viii, 2.4.ii, 2.4iii, 2.4iv, 2.6i, 2.6ii, 2.6iii (Refer to exercise 2.1.vii not 2.2viii), 3.2.i, 3.2viii

• Have a look at Reyer Sjamaar's notes: Manifolds and Differential Forms – especially see Chapter 6 starting on page 75 for the notions of a manifold, tangent spaces, and lots of pictures!

0.7 Homework 7. Now Due Friday, February 28, 2020 at 7:00PM

• Hand in Lecture note Exercises: 9.1 (now corrected), 10.1, 10.2, 10.3, 10.4, 10.6, 10.7, 10.8, 10.17, 10.18

• Look at but do not turn in Lecture note Exercise: 11.1

0.8 Homework 8. Due Friday, March 6, 2020 at 7:00PM

• Hand in Lecture note Exercises: 11.2, 11.3, 11.4, 11.5
• Look at but do not turn in Lecture note Exercise: 10.9 (done in class!)

0.9 Homework 9 = Take Home Final Exam: Due Tuesday, March 17 at 8AM

These problems are part of your Final Exam and are to be worked on by yourself. They are due by the start of the final exam on Tuesday, March 17, from 8-11 AM.

• Hand in Lecture note Exercises: 15.1, 15.2, 15.3, 15.4, 15.5, 15.6, 15.7, 15.8, and 15.9


Part I

Background Material


1 Introduction

This class is devoted to understanding, proving, and exploring the multi-dimensional and "manifold" analogues of the classic one dimensional fundamental theorem of calculus and change of variables theorem. These theorems take on the following form;

∫_M dω = ∫_{∂M} ω  ←→  ∫_a^b g′(x) dx = g(x)|_a^b,  and   (1.1)

∫_M f*ω = deg(f) · ∫_N ω  ←→  ∫_a^b g(f(x)) f′(x) dx = ∫_{f(a)}^{f(b)} g(y) dy.   (1.2)

In meeting our goals we will need to understand all the ingredients in the above formulas, including;

1. M is a manifold.
2. ∂M is the boundary of M.
3. ω is a differential form and dω is its differential.
4. f*ω is the pull back of ω by a "smooth map" f : M → N.
5. deg(f) ∈ Z is the degree of f.
6. There is also a hidden notion of orientation needed to make sense of the above integrals.

Remark 1.1. We will see that Eq. (1.1) encodes (all wrapped into one neat formula) the key integration formulas from 20E: Green's theorem, the Divergence theorem, and Stokes' theorem.


2 Permutations Basics

The following proposition should be verified by the reader.

Proposition 2.1 (Permutation Groups). Let Λ be a set and

G = Σ(Λ) := {σ : Λ → Λ | σ is bijective}.

If we equip G with the binary operation of function composition, then G is a group. The identity element in G is the identity function, ε, and the inverse, σ^{-1}, to σ ∈ G is the inverse function to σ.

Definition 2.2 (Finite permutation groups). For n ∈ Z_+, let [n] := {1, 2, . . . , n} and Σ_n := Σ([n]) be the group described in Proposition 2.1. We will identify elements, σ ∈ Σ_n, with the following 2 × n array,

( 1     2     . . .  n
  σ(1)  σ(2)  . . .  σ(n) ).

(Notice that |Σ_n| = n! since there are n choices for σ(1), n − 1 for σ(2), n − 2 for σ(3), . . . , 1 for σ(n).)

For example, suppose that n = 6 and let

ε = ( 1 2 3 4 5 6
      1 2 3 4 5 6 )  – the identity, and

σ = ( 1 2 3 4 5 6
      2 4 3 1 6 5 ).

We identify σ with the following picture, in which an arrow is drawn from each j in the top row to σ(j) in the bottom row.

[picture of σ: arrows 1 → 2, 2 → 4, 3 → 3, 4 → 1, 5 → 6, 6 → 5]

The inverse to σ is gotten pictorially by reversing all of the arrows above (or, equivalently, redrawing the reversed arrows from the top row to the bottom row),

[picture of σ^{-1}: arrows 1 → 4, 2 → 1, 3 → 3, 4 → 2, 5 → 6, 6 → 5]

and hence,

σ^{-1} = ( 1 2 3 4 5 6
           4 1 3 2 6 5 ).

Of course the identity in this graphical picture is simply given by vertical arrows joining each j in the top row to j in the bottom row.

Now let β ∈ Σ_6 be given by

β = ( 1 2 3 4 5 6
      2 1 4 6 3 5 ),

or in pictures;

[picture of β: arrows 1 → 2, 2 → 1, 3 → 4, 4 → 6, 5 → 3, 6 → 5]

We can now compose the two permutations, β ∘ σ, graphically by stacking the picture for σ on top of the picture for β and following the arrows,


which after erasing the intermediate arrows gives,

[picture of β ∘ σ: arrows 1 → 1, 2 → 6, 3 → 4, 4 → 2, 5 → 5, 6 → 3]

In terms of our array notation we have,

β ∘ σ = ( 1 2 3 4 5 6   ( 1 2 3 4 5 6    =  ( 1 2 3 4 5 6
          2 1 4 6 3 5 )   2 4 3 1 6 5 )       1 6 4 2 5 3 ).
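For readers who like to check such computations by machine, here is a short Python sketch (not part of the original notes) that composes and inverts permutations written in the array notation above; the function names are just illustrative choices.

    # Sketch (not from the notes): a permutation of [n] is stored as the bottom
    # row of its 2 x n array, i.e. sigma[i - 1] = sigma(i).

    def compose(beta, sigma):
        """Return beta o sigma, i.e. the permutation i -> beta(sigma(i))."""
        return tuple(beta[s - 1] for s in sigma)

    def inverse(sigma):
        """Return sigma^{-1} by recording where each value sits in the bottom row."""
        inv = [0] * len(sigma)
        for i, s in enumerate(sigma, start=1):
            inv[s - 1] = i
        return tuple(inv)

    sigma = (2, 4, 3, 1, 6, 5)   # the sigma from the example above
    beta  = (2, 1, 4, 6, 3, 5)   # the beta from the example above

    print(inverse(sigma))        # (4, 1, 3, 2, 6, 5), matching the array for sigma^{-1}
    print(compose(beta, sigma))  # (1, 6, 4, 2, 5, 3), matching the array for beta o sigma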

Remark 2.3 (Optional). It is interesting to observe that β splits into a product of two permutations,

β = ( 1 2 3 4 5 6   ( 1 2 3 4 5 6    =  ( 1 2 3 4 5 6   ( 1 2 3 4 5 6
      2 1 3 4 5 6 )   1 2 4 6 3 5 )       1 2 4 6 3 5 )   2 1 3 4 5 6 ),

corresponding to the non-crossing parts in the graphical picture for β. Each of these permutations is called a "cycle."

Definition 2.4 (Transpositions). A permutation, σ ∈ Σ_k, is a transposition if

#{l ∈ [k] : σ(l) ≠ l} = 2.

We further say that σ is an adjacent transposition if

{l ∈ [k] : σ(l) ≠ l} = {i, i + 1}

for some 1 ≤ i < k.

Example 2.5. If

σ = ( 1 2 3 4 5 6         τ = ( 1 2 3 4 5 6
      1 5 3 4 2 6 )  and        1 2 4 3 5 6 )

then σ is a transposition and τ is an adjacent transposition. Here are the pictorial representations of σ and τ:

[picture of σ: arrows 2 → 5 and 5 → 2, all other points fixed]
[picture of τ: arrows 3 → 4 and 4 → 3, all other points fixed]

In terms of these pictures it is easy to recognize transpositions and adjacent transpositions.


3 Integration Theory Outline

In this course we are going to be considering integrals over open subsets of R^d and more generally over "manifolds." As the prerequisites for this class do not include real analysis, I will begin by summarizing a reasonable working knowledge of integration theory over R^d. We will thus be neglecting some technical details involving measures and σ-algebras. The knowledgeable reader should be able to fill in the missing hypotheses while the less knowledgeable readers should not be too harmed by the omissions to follow.

Definition 3.1. The indicator function of a subset, A ⊂ R^d, is defined by

1_A(x) := { 1 if x ∈ A, 0 if x ∉ A }.

Remark 3.2 (Optional). Every function, f : R^d → R, may be approximated by a linear combination of indicator functions as follows. If ε > 0 is given we let

f_ε := Σ_{n∈N} nε · 1_{nε ≤ f < (n+1)ε},   (3.1)

where {nε ≤ f < (n+1)ε} is shorthand for the set

{x ∈ R^d : nε ≤ f(x) < (n+1)ε}.
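Here is a small numerical illustration (not part of the notes) of Remark 3.2 for a nonnegative function; the particular f, grid, and ε below are arbitrary choices made only for the demonstration.

    import numpy as np

    # Illustration of Remark 3.2: f_eps rounds f(x) down to the nearest multiple
    # of eps, so |f - f_eps| <= eps pointwise (for f >= 0).
    eps = 0.1
    x = np.linspace(-5.0, 5.0, 1001)
    f = np.exp(-x**2)                 # a sample nonnegative function

    n = np.floor(f / eps)             # the index n with n*eps <= f(x) < (n+1)*eps
    f_eps = n * eps                   # the simple function f_eps evaluated at x

    assert np.max(np.abs(f - f_eps)) <= eps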

We now summarize "modern" Lebesgue integration theory over R^d.

1. For each d, there is a uniquely determined volume measure, m_d, on all¹ subsets of R^d with the following properties;

   a) m_d(A) ∈ [0,∞] for all A ⊂ R^d with m_d(∅) = 0.
   b) m_d(A ∪ B) = m_d(A) + m_d(B) if A ∩ B = ∅. More generally, if A_n ⊂ R^d for all n with A_n ∩ A_m = ∅ for m ≠ n, we have

      m_d(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ m_d(A_n).

   c) m_d(x + A) = m_d(A) for all A ⊂ R^d and x ∈ R^d, where

      x + A := {x + y ∈ R^d : y ∈ A}.

   d) m_d([0,1]^d) = 1.

   [The reader is supposed to view m_d(A) as the d-dimensional volume of a subset, A ⊂ R^d.]

   ¹ This is a lie! Nevertheless, for our purposes it will be reasonably safe to ignore this lie.

2. Associated to this volume measure is an integral which takes (not all) functions, f : R^d → R, and assigns to them a number denoted by

      ∫_{R^d} f dm_d = ∫_{R^d} f(x) dm_d(x) ∈ R.

   This integral has the following properties;

   a) When d = 1 and f is a continuous function with compact support, ∫_R f dm_1 is the ordinary integral you studied in your first few calculus courses.
   b) The integral is defined for "all" f ≥ 0 and in this case

      ∫_{R^d} f dm_d ∈ [0,∞]  and  ∫_{R^d} 1_A dm_d = m_d(A) for all A ⊂ R^d.

   c) The integral is "positive" linear, i.e. if f, g ≥ 0 and c ∈ [0,∞), then

      ∫_{R^d} (f + cg) dm_d = ∫_{R^d} f dm_d + c ∫_{R^d} g dm_d.   (3.2)

   d) The integral is monotonic, i.e. if 0 ≤ f ≤ g, then

      ∫_{R^d} f dm_d ≤ ∫_{R^d} g dm_d.   (3.3)

   e) Let L^1(m_d) denote those functions f : R^d → R such that ∫_{R^d} |f| dm_d < ∞. Then for f ∈ L^1(m_d) we define

      ∫_{R^d} f dm_d := ∫_{R^d} f_+ dm_d − ∫_{R^d} f_− dm_d,

      where f_±(x) = max(±f(x), 0), so that f(x) = f_+(x) − f_−(x).

Page 18: Math 150B Di erential Geometrybdriver/150B_W2020/Lecture Notes...0.9 Homework 9 = Take Home Final Exam: Due Tuesday March 17 at 8AM These problems are part of your Final Exam and are


   f) The integral, L^1(m_d) ∋ f → ∫_{R^d} f dm_d, is linear, i.e. Eq. (3.2) holds for all f, g ∈ L^1(m_d) and c ∈ R.
   g) If f, g ∈ L^1(m_d) and f ≤ g, then Eq. (3.3) still holds.

3. The integral enjoys the following continuity properties.

   a) MCT: the monotone convergence theorem holds; if 0 ≤ f_n ↑ f, then

      lim_{n→∞} ∫_{R^d} f_n dm_d = ∫_{R^d} f dm_d  (with ∞ allowed as a possible value).

      Example 1: If {A_n}_{n=1}^∞ is a sequence of subsets of R^d such that A_n ↑ A (i.e. A_n ⊂ A_{n+1} for all n and A = ∪_{n=1}^∞ A_n), then

      m_d(A_n) = ∫_{R^d} 1_{A_n} dm_d ↑ ∫_{R^d} 1_A dm_d = m_d(A)  as n → ∞.

      Example 2: If g_n : R^d → [0,∞] for n ∈ N, then

      ∫_{R^d} Σ_{n=1}^∞ g_n = ∫_{R^d} lim_{N→∞} Σ_{n=1}^N g_n = lim_{N→∞} ∫_{R^d} Σ_{n=1}^N g_n = lim_{N→∞} Σ_{n=1}^N ∫_{R^d} g_n = Σ_{n=1}^∞ ∫_{R^d} g_n.

   b) DCT: the dominated convergence theorem holds; if f_n : R^d → R are functions dominated by a function G ∈ L^1(m_d) in the sense that |f_n(x)| ≤ G(x) for all x ∈ R^d, then assuming that f(x) = lim_{n→∞} f_n(x) exists for a.e. x ∈ R^d, we may conclude that

      lim_{n→∞} ∫_{R^d} f_n dm_d = ∫_{R^d} lim_{n→∞} f_n dm_d = ∫_{R^d} f dm_d.

      Example: If {g_n}_{n=1}^∞ is a sequence of real valued functions such that

      ∫_{R^d} Σ_{n=1}^∞ |g_n| = Σ_{n=1}^∞ ∫_{R^d} |g_n| < ∞,

      then; 1) G := Σ_{n=1}^∞ |g_n| < ∞ a.e. and hence Σ_{n=1}^∞ g_n = lim_{N→∞} Σ_{n=1}^N g_n exists a.e., 2) |Σ_{n=1}^N g_n| ≤ G and ∫_{R^d} G < ∞, and so 3) by DCT,

      ∫_{R^d} Σ_{n=1}^∞ g_n = ∫_{R^d} lim_{N→∞} Σ_{n=1}^N g_n = lim_{N→∞} ∫_{R^d} Σ_{n=1}^N g_n = lim_{N→∞} Σ_{n=1}^N ∫_{R^d} g_n = Σ_{n=1}^∞ ∫_{R^d} g_n.

   c) Fatou's Lemma (*Optional): if 0 ≤ f_n ≤ ∞, then

      ∫_{R^d} [lim inf_{n→∞} f_n] dm_d ≤ lim inf_{n→∞} ∫_{R^d} f_n dm_d.

      This may be proved as an application of MCT.

4. Tonelli's theorem; if f : R^d → [0,∞], then for any i ∈ [d],

      ∫_{R^d} f dm_d = ∫_{R^{d−1}} f̄ dm_{d−1},  where  f̄(x_1, . . . , x_i, . . . , x_d) := ∫_R f(x_1, . . . , x_i, . . . , x_d) dx_i

   is viewed as a function of the remaining d − 1 variables.

5. Fubini's theorem; if f ∈ L^1(m_d) then the previous formula still holds.

6. For our purposes, by repeated use of items 4. and 5. we may compute ∫_{R^d} f dm_d in terms of iterated integrals in any order we prefer. In more detail, if σ ∈ Σ_d is any permutation of [d], then

      ∫_{R^d} f dm_d = ∫_R dx_{σ(1)} · · · ∫_R dx_{σ(d)} f(x_1, . . . , x_d)

   provided either that f ≥ 0 or

      ∫_R dx_{σ(1)} · · · ∫_R dx_{σ(d)} |f(x_1, . . . , x_d)| = ∫_{R^d} |f| dm_d < ∞.

   This fact coupled with item 2a. will basically allow us to understand most integrals appearing in this course.

Notation 3.3 For A ⊂ R^d, we let

∫_A f dm_d := ∫_{R^d} 1_A f dm_d.

Also when d = 1 and −∞ ≤ s < t ≤ ∞, we write

∫_s^t f dm_1 = ∫_{(s,t)} f dm_1 = ∫_R 1_{(s,t)} f dm_1

and (as usual in Riemann integration theory)

∫_t^s f dm_1 := −∫_s^t f dm_1.


Example 3.4. Here is a MCT example,

∫_{−∞}^∞ 1/(1+t²) dt = ∫_{−∞}^∞ lim_{n→∞} 1_{[−n,n]}(t) · 1/(1+t²) dt
  = lim_{n→∞} ∫_{−∞}^∞ 1_{[−n,n]}(t) · 1/(1+t²) dt   (MCT)
  = lim_{n→∞} ∫_{−n}^n 1/(1+t²) dt = lim_{n→∞} [tan^{−1}(n) − tan^{−1}(−n)] = π/2 − (−π/2) = π.

Example 3.5. Similarly, for any x > 0,

∫_0^∞ e^{−tx} dt = ∫_{−∞}^∞ lim_{n→∞} 1_{[0,n]}(t) e^{−tx} dt = lim_{n→∞} ∫_{−∞}^∞ 1_{[0,n]}(t) e^{−tx} dt   (MCT)
  = lim_{n→∞} ∫_0^n e^{−tx} dt = lim_{n→∞} (−1/x) e^{−tx}|_{t=0}^n = 1/x.   (3.4)

Example 3.6. Here is a DCT example,

lim_{n→∞} ∫_{−∞}^∞ 1/(1+t²) · sin(t/n) dt = ∫_{−∞}^∞ lim_{n→∞} 1/(1+t²) · sin(t/n) dt = ∫_R 0 dm = 0

since

lim_{n→∞} 1/(1+t²) · sin(t/n) = 0 for all t ∈ R

and

|1/(1+t²) · sin(t/n)| ≤ 1/(1+t²)  with  ∫_R 1/(1+t²) dt < ∞.

Example 3.7. In this example we will show

lim_{M→∞} ∫_0^M (sin x)/x dx = π/2.   (3.5)

Let us first note that |sin x / x| ≤ 1 for all x and hence by DCT,

∫_0^M (sin x)/x dx = lim_{ε↓0} ∫_ε^M (sin x)/x dx.

Moreover, making use of Eq. (3.4), if 0 < ε < M < ∞, then it follows by Fubini's theorem, DCT, and FTC (the Fundamental Theorem of Calculus) that

∫_ε^M (sin x)/x dx = ∫_ε^M [lim_{N→∞} ∫_0^N e^{−tx} sin x dt] dx   (DCT)
  = lim_{N→∞} ∫_ε^M dx ∫_0^N dt e^{−tx} sin x   (DCT)
  = lim_{N→∞} ∫_0^N dt ∫_ε^M dx e^{−tx} sin x   (Fubini)
  = lim_{N→∞} ∫_0^N dt [1/(1+t²) (−cos x − t sin x) e^{−tx}]_{x=ε}^{x=M}   (FTC)
  = ∫_0^∞ dt [1/(1+t²) (−cos x − t sin x) e^{−tx}]_{x=ε}^{x=M}.   (DCT)

Since

[1/(1+t²) (−cos x − t sin x) e^{−tx}]_{x=ε}^{x=M} → 1/(1+t²)  as M ↑ ∞ and ε ↓ 0,

we may again apply DCT with G(t) = 1/(1+t²) being the dominating function in order to show

∫_0^M (sin x)/x dx = lim_{ε↓0} ∫_ε^M (sin x)/x dx = lim_{ε↓0} ∫_0^∞ dt [1/(1+t²) (−cos x − t sin x) e^{−tx}]_{x=ε}^{x=M}
  = ∫_0^∞ dt [1/(1+t²) (−cos x − t sin x) e^{−tx}]_{x=0}^{x=M}   (DCT)
  → ∫_0^∞ 1/(1+t²) dt = π/2  as M → ∞.   (DCT)
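As a quick sanity check (not part of the notes), one can approximate the truncated integrals in Eq. (3.5) numerically; the grid sizes below are arbitrary and the convergence is slow because the integrand decays only like 1/x.

    import numpy as np

    # Numerically approximate int_0^M sin(x)/x dx for a few values of M; the
    # results oscillate around pi/2 ~ 1.5708 with an error of order 1/M.
    for M in (10.0, 100.0, 1000.0):
        x = np.linspace(0.0, M, 200_001)
        integrand = np.sinc(x / np.pi)      # np.sinc(t) = sin(pi t)/(pi t), i.e. sin(x)/x here
        print(M, np.trapz(integrand, x))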

Theorem 3.8 (Linear Change of Variables Theorem). If T ∈ GL(d,R) = GL(R^d) – the space of d × d invertible matrices, then the change of variables formula,

∫_{R^d} f dm_d = |det T| ∫_{R^d} f ∘ T dm_d,   (3.6)

holds for all Riemann integrable functions f : R^d → R.

Proof. From Exercise 3.6 below, we know that Eq. (3.6) is valid whenever T is an elementary matrix. From the elementary theory of row reduction in linear algebra, every matrix T ∈ GL(R^d) may be expressed as a finite product of "elementary matrices," i.e. T = T_1 T_2 · · · T_n where the T_i are elementary matrices. From these assertions we may conclude that

∫_{R^d} f ∘ T dm_d = ∫_{R^d} f ∘ T_1 ∘ T_2 ∘ · · · ∘ T_n dm_d = (1/|det T_n|) ∫_{R^d} f ∘ T_1 ∘ T_2 ∘ · · · ∘ T_{n−1} dm_d.


Repeating this procedure n − 1 more times (i.e. by induction), we find,

∫_{R^d} f ∘ T dm_d = 1/(|det T_n| · · · |det T_1|) ∫_{R^d} f dm_d.

Finally we use,

|det T_n| · · · |det T_1| = |det T_n · · · det T_1| = |det(T_1 T_2 · · · T_n)| = |det T|,

in order to complete the proof.
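Before moving on to the exercises, here is a rough numerical check (not part of the notes) of Eq. (3.6) in the case d = 2; the particular Gaussian f, matrix T, and Riemann-sum grid are ad hoc choices for illustration only.

    import numpy as np

    # Check int f dm_2 ~ |det T| * int (f o T) dm_2 by Riemann sums on a large box.
    T = np.array([[2.0, 1.0],
                  [0.5, 3.0]])                  # some invertible 2 x 2 matrix
    f = lambda x, y: np.exp(-(x**2 + y**2))     # int_{R^2} f dm_2 = pi

    s = np.linspace(-10.0, 10.0, 801)
    dx = s[1] - s[0]
    X, Y = np.meshgrid(s, s)

    lhs = f(X, Y).sum() * dx**2                              # int f dm_2
    TX = T[0, 0] * X + T[0, 1] * Y                           # first coordinate of T(x)
    TY = T[1, 0] * X + T[1, 1] * Y                           # second coordinate of T(x)
    rhs = abs(np.linalg.det(T)) * f(TX, TY).sum() * dx**2    # |det T| * int (f o T) dm_2

    print(lhs, rhs)    # both should be close to pi ~ 3.14159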

3.1 Exercises

Exercise 3.1. Find the value of the following integral;

I := ∫_1^9 dy ∫_{√y}^3 dx  x e^y.

Hint: use Tonelli's theorem to change the order of integrations.

Exercise 3.2. Write the following iterated integral

I := ∫_0^1 dx ∫_{y=x^{2/3}}^1 dy  x e^{y^4}

as a multiple integral and use this to change the order of integrations and then compute I.

For the next three exercises let

B(0,r) := {x ∈ R^d : ‖x‖ = √(Σ_{i=1}^d x_i²) < r}

be the d-dimensional ball of radius r and let

V_d(r) := m_d(B(0,r)) = ∫_{R^d} 1_{B(0,r)} dm_d

be its volume. For example,

V_1(r) = m_1((−r,r)) = ∫_{−r}^r dx = 2r.

Exercise 3.3. Suppose that d = 2, show m_2(B(0,r)) = πr².

Exercise 3.4. Suppose that d = 3, show m_3(B(0,r)) = (4π/3) r³.

Exercise 3.5. Let V_d(r) := m_d(B(0,r)). Show for d ≥ 1 that

V_{d+1}(r) = ∫_{−r}^r dz · V_d(√(r² − z²)) = r ∫_{−π/2}^{π/2} V_d(r cos θ) cos θ dθ.

Remark 3.9. Using Exercise 3.5 we may deduce again that

V_1(r) = m_1((−r,r)) = 2r,
V_2(r) = r ∫_{−π/2}^{π/2} 2r cos θ · cos θ dθ = πr²,
V_3(r) = ∫_{−r}^r dz · V_2(√(r² − z²)) = ∫_{−r}^r dz · π(r² − z²) = (4π/3) r³.

In principle we may now compute the volume of balls in all dimensions inductively this way.

Exercise 3.6 (Change of variables for elementary matrices). Let f : R^d → R be a continuous function with compact support. Show by direct calculation that;

|det T| ∫_{R^d} f(T(x)) dx = ∫_{R^d} f(y) dy   (3.7)

for each of the following linear transformations;

1. Suppose that i < k and

   T(x_1, x_2, . . . , x_d) = (x_1, . . . , x_{i−1}, x_k, x_{i+1}, . . . , x_{k−1}, x_i, x_{k+1}, . . . , x_d),

   i.e. T swaps the i and k coordinates of x. [In matrix notation T is the identity matrix with the i and k columns interchanged.]
2. T(x_1, . . . , x_k, . . . , x_d) = (x_1, . . . , c x_k, . . . , x_d) where c ∈ R \ {0}. [In matrix notation, T = [e_1| . . . |e_{k−1}|c e_k|e_{k+1}| . . . |e_d].]
3. T(x_1, x_2, . . . , x_d) = (x_1, . . . , x_i + c x_k, . . . , x_k, . . . , x_d) where c ∈ R and the term x_i + c x_k appears in the i-th spot. [In matrix notation T = [e_1| . . . |e_i| . . . |e_k + c e_i|e_{k+1}| . . . |e_d].]

Hint: you should use Fubini's theorem along with the one dimensional change of variables theorem.

[To be more concrete, here are examples of each of the T appearing above in the special case d = 4.

1. If i = 2 and k = 3 then

   T = ( 1 0 0 0
         0 0 1 0
         0 1 0 0
         0 0 0 1 ).

2. If k = 3 then

   T = ( 1 0 0 0
         0 1 0 0
         0 0 c 0
         0 0 0 1 ).

3. If i = 2 and k = 4 then T(x_1, x_2, x_3, x_4) = (x_1, x_2 + c x_4, x_3, x_4), i.e.

   T = ( 1 0 0 0
         0 1 0 c
         0 0 1 0
         0 0 0 1 ),

   while if i = 4 and k = 2, T(x_1, x_2, x_3, x_4) = (x_1, x_2, x_3, x_4 + c x_2), i.e.

   T = ( 1 0 0 0
         0 1 0 0
         0 0 1 0
         0 c 0 1 ).]

3.2 *Appendix: Another approach to the linear change of variables theorem

Let 〈x, y〉 or x · y denote the standard dot product on R^d, i.e.

〈x, y〉 = x · y = Σ_{j=1}^d x_j y_j.

Recall that if A is a d × d real matrix then the transpose matrix, A^tr, may be characterized as the unique real d × d matrix such that

〈Ax, y〉 = 〈x, A^tr y〉  for all x, y ∈ R^d.

Definition 3.10. A d × d real matrix, S, is orthogonal iff S^tr S = I or, equivalently stated, S^tr = S^{−1}.

Here are a few basic facts about orthogonal matrices.

1. A d × d real matrix, S, is orthogonal iff 〈Sx, Sy〉 = 〈x, y〉 for all x, y ∈ R^d.
2. If {u_j}_{j=1}^d is any orthonormal basis for R^d and S is the d × d matrix determined by Se_j = u_j for 1 ≤ j ≤ d, then S is orthogonal.² Here is a proof for your convenience; if x, y ∈ R^d, then

   〈x, S^tr y〉 = 〈Sx, y〉 = Σ_{j=1}^d 〈x, e_j〉〈Se_j, y〉 = Σ_{j=1}^d 〈x, e_j〉〈u_j, y〉
              = Σ_{j=1}^d 〈x, S^{−1}u_j〉〈u_j, y〉 = 〈x, S^{−1}y〉,

   from which it follows that S^tr = S^{−1}.

   ² This is a standard result from linear algebra, often stated as: a matrix, S, is orthogonal iff the columns of S form an orthonormal basis.

3. If S is orthogonal, then 1 = det I = det(S^tr S) = det S^tr · det S = (det S)² and hence det S = ±1.

The following lemma is a special case of the well known singular value decomposition, or SVD for short.

Lemma 3.11 (SVD). If T is a real d × d matrix, then there exists D = diag(λ_1, . . . , λ_d) with λ_1 ≥ λ_2 ≥ · · · ≥ λ_d ≥ 0 and two orthogonal matrices R and S such that T = RDS. Further observe that |det T| = det D = λ_1 · · · λ_d.

Proof. Since T^tr T is symmetric, by the spectral theorem there exists an orthonormal basis {u_j}_{j=1}^d of R^d and λ_1 ≥ λ_2 ≥ · · · ≥ λ_d ≥ 0 such that T^tr T u_j = λ_j² u_j for all j. In particular we have

〈Tu_j, Tu_k〉 = 〈T^tr T u_j, u_k〉 = λ_j² δ_{jk}  ∀ 1 ≤ j, k ≤ d.

Case where det T ≠ 0. In this case λ_1² · · · λ_d² = det(T^tr T) = (det T)² > 0 and so λ_d > 0. It then follows that {v_j := (1/λ_j) Tu_j}_{j=1}^d is an orthonormal basis for R^d. Let us further let D = diag(λ_1, . . . , λ_d) (i.e. De_j = λ_j e_j for 1 ≤ j ≤ d) and R and S be the orthogonal matrices defined by

Re_j = v_j  and  S^tr e_j = S^{−1} e_j = u_j  for all 1 ≤ j ≤ d.

Combining these definitions with the identity, Tu_j = λ_j v_j, implies

TS^{−1} e_j = λ_j Re_j = Rλ_j e_j = RDe_j  for all 1 ≤ j ≤ d,

i.e. TS^{−1} = RD or equivalently T = RDS.

Case where det T = 0. In this case there exists 1 ≤ k < d such that λ_1 ≥ λ_2 ≥ · · · ≥ λ_k > 0 = λ_{k+1} = · · · = λ_d. The only modification needed for the above proof is to define v_j := (1/λ_j) Tu_j for j ≤ k and then choose v_{k+1}, . . . , v_d ∈ R^d so that {v_j}_{j=1}^d is an orthonormal basis for R^d. We still have Tu_j = λ_j v_j for all j and so the proof in the first case goes through without change.
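Lemma 3.11 can be checked numerically (this sketch is not part of the notes) using numpy's built-in SVD, which returns exactly a factorization T = RDS with R and S orthogonal; the random 4 × 4 matrix is an arbitrary test case.

    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.standard_normal((4, 4))

    R, sing_vals, S = np.linalg.svd(T)       # numpy returns T = R @ diag(sing_vals) @ S
    D = np.diag(sing_vals)

    assert np.allclose(T, R @ D @ S)                               # T = RDS
    assert np.allclose(R.T @ R, np.eye(4))                         # R is orthogonal
    assert np.allclose(S.T @ S, np.eye(4))                         # S is orthogonal
    assert np.isclose(abs(np.linalg.det(T)), sing_vals.prod())     # |det T| = lambda_1 ... lambda_d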

In the next theorem we will make use of the characterization of m_d as the unique measure on R^d which is translation invariant and assigns unit measure to [0,1]^d.


Theorem 3.12. If T is a real d × d matrix, then m_d ∘ T = |det T| · m_d.

Proof. Recall that we know m_d ∘ T = δ(T) m_d for some δ(T) ∈ (0,∞) and so we must show δ(T) = |det T|. We first consider two special cases.

1. If T = R is orthogonal and B is the unit ball in R^d,³ then δ(R) m_d(B) = m_d(RB) = m_d(B), from which it follows δ(R) = 1 = |det R|.
2. If T = D = diag(λ_1, . . . , λ_d) with λ_i ≥ 0, then D[0,1]^d = [0,λ_1] × · · · × [0,λ_d] so that

   δ(D) = δ(D) m_d([0,1]^d) = m_d(D[0,1]^d) = λ_1 · · · λ_d = det D.

3. For the general case we use the singular value decomposition (Lemma 3.11) to write T = RDS and then find

   δ(T) = δ(R) δ(D) δ(S) = 1 · det D · 1 = |det T|.

³ B = {x ∈ R^d : ‖x‖ < 1}.


Part II

Multi-Linear Algebra


4 Properties of Volumes

The goal of this short chapter is to show how computing volumes naturally gives rise to the idea of the key objects of this book, namely differential forms, i.e. alternating tensors. The point is that these objects are intimately related to computing areas and volumes.

Let Q_n := {x ∈ R^n : 0 ≤ x_j ≤ 1 ∀ j} = [0,1]^n be the unit cube in R^n which, I think, we all agree should have volume equal to 1. For n vectors, a_1, . . . , a_n ∈ R^n, let

P(a_1, . . . , a_n) = [a_1| . . . |a_n] Q = {Σ_{j=1}^n t_j a_j : 0 ≤ t_j ≤ 1 ∀ j}

be the parallelepiped spanned by (a_1, . . . , a_n) and let

δ(a_1, . . . , a_n) = "signed" Vol(P(a_1, . . . , a_n))

be the signed volume of the parallelepiped. To find the properties of this volume, let us fix {a_i}_{i=1}^{n−1} and consider the function, F(a_n) = δ(a_1, . . . , a_n). This is easily computed using the formula for the volume of a slant cylinder, see Figure 4.1, as

F(a_n) = δ(a_1, . . . , a_n) = ±(Area of base) · n · a_n   (4.1)

where n is a unit vector orthogonal to a_1, . . . , a_{n−1}.

Example 4.1. When n = 2, let us first verify Eq. (4.1) in this case by considering

δ(a e_1, b) = ∫_0^{b_2} [slice width at height h] dh = ∫_0^{b_2} a dh = a (b · e_2).

The sign in Eq. (4.1) is positive if (a_1, . . . , a_{n−1}, n) is "positively oriented," think of the right hand rule in dimensions 2 and 3. This shows a_n → δ(a_1, . . . , a_{n−1}, a_n) is a linear function. A similar argument shows

a_j → δ(a_1, . . . , a_j, . . . , a_n)

is linear as well. That is, δ is a "multi-linear function" of its arguments. We further have that δ(a_1, . . . , a_n) = 0 if a_i = a_j for any i ≠ j, as the parallelepiped generated by (a_1, . . . , a_n) is degenerate and has zero volume. We summarize these two properties by saying δ is an alternating multi-linear n-function on R^n. Lastly, as P(e_1, . . . , e_n) = Q we further have that

δ(e_1, . . . , e_n) = 1.   (4.2)

Fig. 4.1. The volume of a slant cylinder is the area of its base times its height, n · a_n.

Fact 4.2 We are going to show there is precisely one alternating multi-linear n-function, δ, on R^n such that Eq. (4.2) holds. This function is in fact the function you already know, namely the determinant.

Example 4.3 (n = 1 Det). When n = 1 we must have δ([a]) = ±a; we choose +a by convention.

Example 4.4 (n = 2 Det). When n = 2, we find

δ(a, b) = δ(a_1 e_1 + a_2 e_2, b) = a_1 δ(e_1, b) + a_2 δ(e_2, b)
        = a_1 δ(e_1, b_1 e_1 + b_2 e_2) + a_2 δ(e_2, b_1 e_1 + b_2 e_2)
        = a_1 b_2 δ(e_1, e_2) + a_2 b_1 δ(e_2, e_1) = a_1 b_2 − a_2 b_1
        = det[a|b].
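A one-line numerical check (not part of the notes) that the formula just derived agrees with the usual determinant of the 2 × 2 matrix [a|b]; the random vectors are an arbitrary test case.

    import numpy as np

    rng = np.random.default_rng(1)
    a, b = rng.standard_normal(2), rng.standard_normal(2)

    delta = a[0] * b[1] - a[1] * b[0]          # a1*b2 - a2*b1 from Example 4.4
    assert np.isclose(delta, np.linalg.det(np.column_stack([a, b])))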

We now proceed to develop the theory of alternating multilinear functions in general.


5 Multi-linear Functions (Tensors)

For the rest of these notes, V will denote a real vector space. Typically we will assume that n = dim V < ∞.

Example 5.1. V = R^n, subspaces of R^n, polynomials of degree < n. The most general overarching vector space is typically

V = F(X, R) = {all functions from X to R}.

An interesting subspace is the space of finitely supported functions,

F_f(X, R) = {f ∈ F(X, R) : #(f ≠ 0) < ∞},

where

{f ≠ 0} = {x ∈ X : f(x) ≠ 0}.

5.1 Basis and Dual Basis

Definition 5.2. Let V* denote the dual space of V, i.e. the vector space of all linear functions, ℓ : V → R.

Example 5.3. Here are some examples;

1. If V = R^n, then ℓ(v) = w · v = w^tr v for w ∈ V is in V*.
2. V = {polynomials of deg < n} is a vector space and ℓ_0(p) = p(0) or ℓ(p) = ∫_{−1}^1 p(x) dx give elements ℓ ∈ V*.
3. For {a_j}_{j=1}^p ⊂ R and {x_j}_{j=1}^p ⊂ X, let ℓ(f) = Σ_{j=1}^p a_j f(x_j); then ℓ ∈ F(X, R)*.

Notation 5.4 Let β := {e_j}_{j=1}^n be a basis for V and β* := {ε_j}_{j=1}^n be its dual basis, i.e.

ε_j(Σ_{i=1}^n a_i e_i) := a_j  for all j.

The book denotes ε_j as e_j^*. In case V = R^n and {e_j}_{j=1}^n is the standard basis, we will later write dx_j for ε_j = e_j^*.

Example 5.5. If V = R^n and β = {e_i}_{i=1}^n is the standard basis for R^n, then ε_i(v) = e_i · v = e_i^tr v for 1 ≤ i ≤ n gives the dual basis to β.

Example 5.6. If V denotes polynomials of degree < n, with basis e_j(x) = x^j for 0 ≤ j < n, then ε_j(p) := (1/j!) p^{(j)}(0) is the associated dual basis.

Example 5.7. For x ∈ X, let δ_x ∈ F_f(X, R) be defined by

δ_x(y) = 1_x(y) = { 1 if y = x, 0 if y ≠ x }.

One may easily show that {δ_x}_{x∈X} is a basis for F_f(X, R) and for f ∈ F_f(X, R),

f = Σ_{x : f(x) ≠ 0} f(x) δ_x.

The dual basis ideas are complicated in this case when X is an infinite set, as Vaki mentioned in section. We will not consider such "infinite dimensional" problems in these notes.

Proposition 5.8. Continuing the notation above, then

v = Σ_{j=1}^n ε_j(v) e_j  for all v ∈ V, and   (5.1)

ℓ = Σ_{j=1}^n ℓ(e_j) ε_j  for all ℓ ∈ V*.   (5.2)

Moreover, β* is indeed a basis for V*.

Proof. Because {e_j} is a basis, we know that v = Σ_{j=1}^n a_j e_j. Applying ε_k to this formula shows

ε_k(v) = Σ_{j=1}^n a_j ε_k(e_j) = a_k

and hence Eq. (5.1) holds. Now apply ℓ to Eq. (5.1) to find,


ℓ(v) = Σ_{j=1}^n ε_j(v) ℓ(e_j) = Σ_{j=1}^n ℓ(e_j) ε_j(v) = (Σ_{j=1}^n ℓ(e_j) ε_j)(v),

which proves Eq. (5.2). From Eq. (5.2) we know that {ε_j}_{j=1}^n spans V*. Moreover, if

0 = Σ_{j=1}^n a_j ε_j  =⇒  0 = 0(e_k) = Σ_{j=1}^n a_j ε_j(e_k) = a_k,

which shows {ε_j}_{j=1}^n is linearly independent.

Exercise 5.1. Let V = R^n and β = {u_j}_{j=1}^n be a basis for R^n. Recall that every ℓ ∈ (R^n)* is of the form ℓ_a(x) = a · x for some a ∈ R^n. Thus the dual basis, β*, to β can be written as {u_j^* = ℓ_{a_j}}_{j=1}^n for some {a_j}_{j=1}^n ⊂ R^n. In this problem you are asked to show how to find the {a_j}_{j=1}^n by the following steps.

1. Show that for j ∈ [n], a_j must solve the following n linear equations;

   δ_{j,k} = ℓ_{a_j}(u_k) = a_j · u_k = u_k^tr a_j  for k ∈ [n].   (5.3)

2. Let U := [u_1| . . . |u_n] (i.e. the columns of U are the vectors from β). Show that the equations in (5.3) may be written in matrix form as, U^tr a_j = e_j, where {e_j}_{j=1}^n is the standard basis for R^n.
3. Conclude that a_j = [U^tr]^{−1} e_j or equivalently;

   [a_1| . . . |a_n] = [U^tr]^{−1}.
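The recipe in Exercise 5.1 is easy to try out numerically; the sketch below (not part of the notes) uses an arbitrary basis of R^3 chosen for illustration only, not the basis of the next exercise, so that exercise is left intact.

    import numpy as np

    # With U = [u1|...|un], the vectors a_j representing the dual basis are the
    # columns of (U^tr)^{-1}; equivalently A^tr U = I, i.e. a_j . u_k = delta_{jk}.
    U = np.array([[2.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0]])           # columns u1, u2, u3 form a basis of R^3

    A = np.linalg.inv(U.T)                    # columns are a1, a2, a3

    assert np.allclose(A.T @ U, np.eye(3))    # u_j^*(u_k) = delta_{jk}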

Exercise 5.2. Let V = R² and β = {u_1, u_2}, where

u_1 = (1, 3)^tr  and  u_2 = (−1, 1)^tr.

Find a_1, a_2 ∈ R² explicitly so that β* := {u_1^* = ℓ_{a_1}, u_2^* = ℓ_{a_2}} is the dual basis to β. Please explicitly verify your answer is correct by showing u_j^*(u_k) = δ_{jk}.

Exercise 5.3. Let V = R^n, {a_j}_{j=1}^k ⊂ V, and ℓ_j(x) = a_j · x for x ∈ R^n and j ∈ [k]. Show {ℓ_j}_{j=1}^k ⊂ V* is a linearly independent set if and only if {a_j}_{j=1}^k ⊂ V is a linearly independent set.

Exercise 5.4. Let V = R^n, {a_j}_{j=1}^k ⊂ V, and ℓ_j(x) = a_j · x for x ∈ R^n and j ∈ [k]. If {ℓ_j}_{j=1}^k ⊂ V* is a linearly independent set, show there exists {u_j}_{j=1}^k ⊂ V so that ℓ_i(u_j) = δ_{ij} for i, j ∈ [k]. Here is a possible outline.

1. Using Exercise 5.3 and citing a basic fact from linear algebra, you may choose {a_j}_{j=k+1}^n ⊂ V so that {a_j}_{j=1}^n is a basis for V.
2. Argue that it suffices to find u_j ∈ V so that

   a_i · u_j = δ_{ij}  for all i, j ∈ [n].   (5.4)

3. Let {e_j}_{j=1}^n be the standard basis for R^n and A := [a_1| . . . |a_n] be the n × n matrix with columns given by the {a_j}_{j=1}^n. Show that the Eqs. (5.4) may be written as

   A^tr u_j = e_j  for j ∈ [n].   (5.5)

4. Cite basic facts from linear algebra to explain why A := [a_1| . . . |a_n] and A^tr are both invertible n × n matrices.
5. Argue that Eq. (5.5) has a unique solution, u_j ∈ R^n, for each j.

5.2 Multi-linear Forms

Definition 5.9. A function T : V^k → R is multi-linear (k-linear to be precise) if for each 1 ≤ i ≤ k, the map

V ∋ v_i → T(v_1, . . . , v_i, . . . , v_k) ∈ R

is linear. We denote the space of k-linear maps by L_k(V); an element of this space is called a k-tensor on (in) V.

Lemma 5.10. Note that L_k(V) is a vector subspace of the space of all functions from V^k to R.

Example 5.11. If ℓ_1, . . . , ℓ_k ∈ V*, we let ℓ_1 ⊗ · · · ⊗ ℓ_k ∈ L_k(V) be defined by

(ℓ_1 ⊗ · · · ⊗ ℓ_k)(v_1, . . . , v_k) = Π_{j=1}^k ℓ_j(v_j)

for all (v_1, . . . , v_k) ∈ V^k.

Exercise 5.5. In this problem, let

v = (v_1, v_2, v_3)^tr  and  w = (w_1, w_2, w_3)^tr.

Which of the following formulas for T define a 2-tensor on R³? Please justify your answers.


1. T(v, w) = v_1 w_3 + v_1 w_2 + v_2 w_1 + 7 v_1 w_1.
2. T(v, w) = v_1 + 7 v_1 + v_2.
3. T(v, w) = v_1² w_3 + v_2 w_1.
4. T(v, w) = sin(v_1 w_3 + v_1 w_2 + v_2 w_1 + 7 v_1 w_1).

Theorem 5.12. If {e_j}_{j=1}^n is a basis for V, then {ε_{j_1} ⊗ · · · ⊗ ε_{j_k} : j_i ∈ [n]} is a basis for L_k(V) and moreover if T ∈ L_k(V), then

T = Σ_{j_1,...,j_k ∈ [n]} T(e_{j_1}, . . . , e_{j_k}) · ε_{j_1} ⊗ · · · ⊗ ε_{j_k}   (5.6)

and this decomposition is unique. [One might identify 2-tensors with matrices via T → A_{ij} := T(e_i, e_j).]

Proof. Given v_1, . . . , v_k ∈ V, we know that

v_i = Σ_{j_i=1}^n ε_{j_i}(v_i) e_{j_i}

and hence

T(v_1, . . . , v_k) = T(Σ_{j_1=1}^n ε_{j_1}(v_1) e_{j_1}, . . . , Σ_{j_k=1}^n ε_{j_k}(v_k) e_{j_k})
  = Σ_{j_1=1}^n · · · Σ_{j_k=1}^n T(ε_{j_1}(v_1) e_{j_1}, . . . , ε_{j_k}(v_k) e_{j_k})
  = Σ_{j_1,...,j_k ∈ [n]} T(e_{j_1}, . . . , e_{j_k}) ε_{j_1}(v_1) · · · ε_{j_k}(v_k)
  = Σ_{j_1,...,j_k ∈ [n]} T(e_{j_1}, . . . , e_{j_k}) ε_{j_1} ⊗ · · · ⊗ ε_{j_k}(v_1, . . . , v_k).

This verifies that Eq. (5.6) holds and also that

{ε_{j_1} ⊗ · · · ⊗ ε_{j_k} : j_i ∈ [n]} spans L_k(V).

For linear independence, if {a_{j_1,...,j_k}} ⊂ R are such that

0 = Σ_{j_1,...,j_k ∈ [n]} a_{j_1,...,j_k} · ε_{j_1} ⊗ · · · ⊗ ε_{j_k},

then evaluating this expression at (e_{i_1}, . . . , e_{i_k}) shows

0 = 0(e_{i_1}, . . . , e_{i_k}) = Σ_{j_1,...,j_k ∈ [n]} a_{j_1,...,j_k} · ε_{j_1}(e_{i_1}) · · · ε_{j_k}(e_{i_k}) = Σ_{j_1,...,j_k ∈ [n]} a_{j_1,...,j_k} · δ_{j_1,i_1} · · · δ_{j_k,i_k} = a_{i_1,...,i_k},

which shows a_{i_1,...,i_k} = 0 for all indices and completes the proof.

Corollary 5.13. dim L_k(V) = n^k.
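To make Theorem 5.12 concrete, here is a small numerical check (not part of the notes) of Eq. (5.6) for a 2-tensor on R³ with the standard basis, where ε_i(v) = v_i; the random bilinear form and test vectors are arbitrary choices.

    import numpy as np

    # Any bilinear T(v, w) = v . (M w) has coefficients T(e_i, e_j) = M[i, j], so
    # summing the n^k = 9 terms of Eq. (5.6) should reproduce T(v, w) exactly.
    rng = np.random.default_rng(2)
    M = rng.standard_normal((3, 3))
    T = lambda v, w: v @ M @ w

    v, w = rng.standard_normal(3), rng.standard_normal(3)
    e = np.eye(3)

    expansion = sum(T(e[i], e[j]) * v[i] * w[j] for i in range(3) for j in range(3))
    assert np.isclose(T(v, w), expansion)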

Definition 5.14. If S ∈ L_p(V) and T ∈ L_q(V), then we define S ⊗ T ∈ L_{p+q}(V) by,

(S ⊗ T)(v_1, . . . , v_p, w_1, . . . , w_q) = S(v_1, . . . , v_p) T(w_1, . . . , w_q).

Definition 5.15. If A : V → W is a linear transformation and T ∈ L_k(W), then we define the pull back A*T ∈ L_k(V) by

(A*T)(v_1, . . . , v_k) = T(Av_1, . . . , Av_k),

i.e.

V × · · · × V → W × · · · × W → R
(v_1, . . . , v_k) → (Av_1, . . . , Av_k) → T(Av_1, . . . , Av_k).

It is called the pull back since A* : L_k(W) → L_k(V) maps in the opposite direction of A.

Remark 5.16. As shown in the book, the tensor product satisfies

(R ⊗ S) ⊗ T = R ⊗ (S ⊗ T),
T ⊗ (S_1 + S_2) = T ⊗ S_1 + T ⊗ S_2,
(S_1 + S_2) ⊗ T = S_1 ⊗ T + S_2 ⊗ T, . . . .

Remark 5.17 (The definition of T_1 ⊗ T_2 and the associated "tensor algebra"). [Typically the tensor symbol, ⊗, in mathematics is used to denote the product of two functions which have distinct arguments. Thus if f : X → R and g : Y → R are two functions on the sets X and Y respectively, then f ⊗ g : X × Y → R is defined by

(f ⊗ g)(x, y) = f(x) g(y).

In contrast, if Y = X we may also define the more familiar product, f · g : X → R, by

(f · g)(x) = f(x) g(x).

Incidentally, the relationship between these two products is

(f · g)(x) = (f ⊗ g)(x, x).]

Page: 23 job: 150BNotes macro: svmonob.cls date/time: 5-Mar-2020/15:14

Page 30: Math 150B Di erential Geometrybdriver/150B_W2020/Lecture Notes...0.9 Homework 9 = Take Home Final Exam: Due Tuesday March 17 at 8AM These problems are part of your Final Exam and are

Lemma 5.18. The product, ⊗, defined in the previous remark is associative and distributive over addition. We also have for λ ∈ R that

(λf) ⊗ g = f ⊗ (λg) = λ · f ⊗ g.   (5.7)

That is, ⊗ satisfies the rules we expect of a "product," i.e. it plays nicely with the vector space operations.

Proof. If h : Z → R is another function, then

((f ⊗ g) ⊗ h)(x, y, z) = (f ⊗ g)(x, y) · h(z) = (f(x) g(y)) h(z) = f(x)(g(y) h(z)) = (f ⊗ (g ⊗ h))(x, y, z).

This shows in general that (f ⊗ g) ⊗ h = f ⊗ (g ⊗ h), i.e. ⊗ is associative. Similarly if Z = Y, then

(f ⊗ (g + h))(x, y) = f(x) · (g + h)(y) = f(x) · (g(y) + h(y))
= f(x) · g(y) + f(x) · h(y)
= (f ⊗ g)(x, y) + (f ⊗ h)(x, y)
= (f ⊗ g + f ⊗ h)(x, y),

from which we conclude that

f ⊗ (g + h) = f ⊗ g + f ⊗ h.

Similarly one shows (f + h) ⊗ g = f ⊗ g + h ⊗ g when Z = X. These are the distributive rules. The easy proof of Eq. (5.7) is left to the reader.


6 Alternating Multi-linear Functions

Definition 6.1. T ∈ L^k(V) is said to be alternating if T(v_1, ..., v_k) = −T(w_1, ..., w_k) whenever (w_1, ..., w_k) is the list (v_1, ..., v_k) with any two entries interchanged. We denote the subspace^1 of alternating functions by A^k(V) or by Λ^k(V*) with the convention that A^0(V) = Λ^0(V*) = R. An element T ∈ A^k(V) = Λ^k(V*) will be called a k-form.

Remark 6.2. If f(v, w) is a multi-linear function such that f(v, v) = 0 for all v ∈ V, then for all v, w ∈ V,

0 = f(v + w, v + w) = f(v, v) + f(w, w) + f(v, w) + f(w, v) = f(w, v) + f(v, w) =⇒ f(v, w) = −f(w, v).

Conversely, if f(v, w) = −f(w, v) for all v and w, then f(v, v) = −f(v, v), which shows f(v, v) = 0.

Lemma 6.3. If T ∈ L^k(V), then the following are equivalent;

1. T is alternating, i.e. T ∈ Λ^k(V*).
2. T(v_1, ..., v_k) = 0 whenever any two distinct entries are equal.
3. T(v_1, ..., v_k) = 0 whenever any two consecutive entries are equal.

Proof. 1. =⇒ 2. If v_i = v_j for some i < j and T ∈ Λ^k(V*), then by interchanging the i and j entries we learn that T(v_1, ..., v_k) = −T(v_1, ..., v_k), which implies T(v_1, ..., v_k) = 0.

2. =⇒ 3. This is obvious.

3. =⇒ 1. Applying Remark 6.2 with

f(v, w) := T(v_1, ..., v_{j−1}, v, w, v_{j+2}, ..., v_k)

shows that T(v_1, ..., v_k) = −T(w_1, ..., w_k) if (w_1, ..., w_k) is the list (v_1, ..., v_k) with the j and j+1 entries interchanged. If (w_1, ..., w_k) is the list (v_1, ..., v_k) with the i < j entries interchanged, then (w_1, ..., w_k) can be transformed back to (v_1, ..., v_k) by an odd number of nearest neighbor interchanges and therefore it follows by what we just proved that

T(v_1, ..., v_k) = −T(w_1, ..., w_k).

1 The alternating conditions are linear equations that T ∈ L^k(V) must satisfy and hence A^k(V) is a subspace of L^k(V).

For example, to transform

(v_1, v_5, v_3, v_4, v_2, v_6) back to (v_1, v_2, v_3, v_4, v_5, v_6),

we transpose v_5 with its nearest neighbor to the right 2 times to arrive at the list (v_1, v_3, v_4, v_5, v_2, v_6). We then transpose v_2 with its nearest neighbor to the left 3 times to arrive (after a sum total of 5 adjacent transpositions) back at the list (v_1, v_2, v_3, v_4, v_5, v_6). For general i < j the number of adjacent transpositions needed is 2(j − i) − 1, which is always odd.

Exercise 6.1. If T ∈ Λ^k(V*), show T(v_1, ..., v_k) = 0 whenever {v_i}_{i=1}^k ⊂ V are linearly dependent.

A simple consequence of this exercise is the following basic lemma.

Lemma 6.4. If T ∈ Λ^k(V*) with k > dim V, then T ≡ 0, i.e. Λ^k(V*) = {0} for all k > dim V.

At this point we have not given any non-zero examples of alternating forms. The next definition and proposition give a mechanism for constructing many (in fact a full basis of) alternating forms.

Definition 6.5. For ℓ ∈ V* and ϕ ∈ Λ^k(V*), let L_ℓϕ be the multi-linear (k+1)-form on V defined by

(L_ℓϕ)(v_0, ..., v_k) = Σ_{i=0}^k (−1)^i ℓ(v_i) ϕ(v_0, ..., v̂_i, ..., v_k)

for all (v_0, ..., v_k) ∈ V^{k+1}, where the hat indicates that the entry v_i is omitted.

Proposition 6.6. If ℓ ∈ V* and ϕ ∈ Λ^k(V*), then L_ℓϕ ∈ Λ^{k+1}(V*).

Proof. We must show L_ℓϕ is alternating. According to Lemma 6.3, it suffices to show (L_ℓϕ)(v_0, ..., v_k) = 0 whenever v_j = v_{j+1} for some 0 ≤ j < k. So suppose that v_j = v_{j+1}; then since ϕ is alternating,

(L_ℓϕ)(v_0, ..., v_k) = Σ_{i=0}^k (−1)^i ℓ(v_i) ϕ(v_0, ..., v̂_i, ..., v_k)
= Σ_{i=j}^{j+1} (−1)^i ℓ(v_i) ϕ(v_0, ..., v̂_i, ..., v_k)
= [(−1)^j + (−1)^{j+1}] ℓ(v_j) ϕ(v_0, ..., v̂_j, ..., v_k) = 0.

Proposition 6.7. Let {e_i}_{i=1}^n be a basis for V and {ε_i}_{i=1}^n be its dual basis for V*. Then

ϕ_j := L_{ε_j} L_{ε_{j+1}} · · · L_{ε_{n−1}} ε_n ∈ Λ^{n−j+1}(V*) \ {0}

for all j ∈ [n] and in particular, dim Λ^k(V*) ≥ 1 for all 0 ≤ k ≤ n. [We will see in Theorem 6.33 below that dim Λ^k(V*) = (n choose k) for all 0 ≤ k ≤ n.]

Proof. We will show that ϕ_j is not zero by showing that

ϕ_j(e_j, ..., e_n) = 1 for all j ∈ [n].

This is easily proved by reverse induction on j. Indeed, for j = n we have ϕ_n(e_n) = ε_n(e_n) = 1, and for 1 ≤ j < n we have ϕ_j := L_{ε_j} ϕ_{j+1}, so that

ϕ_j(e_j, ..., e_n) = Σ_{k=j}^n (−1)^{k−j} ε_j(e_k) ϕ_{j+1}(e_j, ..., ê_k, ..., e_n)
= ϕ_{j+1}(ê_j, e_{j+1}, ..., e_n) = ϕ_{j+1}(e_{j+1}, ..., e_n) = 1,

wherein we used the induction hypothesis for the last equality. This completes the proof for j ∈ [n]. Finally for k = 0, we have Λ^0(V*) = R by convention and hence dim Λ^0(V*) = 1.

Notation 6.8 Fix a basis {e_i}_{i=1}^n of V with dual basis {ε_i}_{i=1}^n ⊂ V*, and then let

ϕ = ϕ_1 = L_{ε_1} L_{ε_2} · · · L_{ε_{n−1}} ε_n.   (6.1)

Proposition 6.9. When V = R^n and {e_j}_{j=1}^n is the standard basis for V, then

ϕ(a_1, ..., a_n) = det[a_1| ... |a_n]  ∀ {a_i}_{i=1}^n ⊂ R^n.   (6.2)

Proof. Let us note that

ϕ(a_1, ..., c a_i, ..., a_n) = c ϕ(a_1, ..., a_n) and

ϕ(a_1, ..., a_i, ..., a_j + c a_i, ..., a_n)
= ϕ(a_1, ..., a_i, ..., a_j, ..., a_n) + c ϕ(a_1, ..., a_i, ..., a_i, ..., a_n)
= ϕ(a_1, ..., a_n) + c · 0 = ϕ(a_1, ..., a_n).

Thus both ϕ and det behave the same way under column operations and agree when a_i = e_i, which already shows Eq. (6.2) holds when {a_i}_{i=1}^n are linearly independent. As both sides of Eq. (6.2) are zero when {a_i}_{i=1}^n are linearly dependent, the proof is complete.

Definition 6.10 (Signature of σ). For σ ∈ Σ_n, let

(−1)^σ := ϕ(e_{σ1}, ..., e_{σn}),

where ϕ is as in Notation 6.8. We call (−1)^σ the sign of the permutation σ.

Lemma 6.11. If σ ∈ Σ_n, then (−1)^σ may be computed as (−1)^N where N is the number of transpositions^2 needed to bring (σ1, ..., σn) back to (1, 2, ..., n), and so (−1)^σ does not depend on the choices made in defining (−1)^σ. Moreover, if {v_j}_{j=1}^n ⊂ V, then

ϕ(v_{σ1}, ..., v_{σn}) = (−1)^σ ϕ(v_1, ..., v_n)  ∀ σ ∈ Σ_n.

Proof. Straightforward and left to the reader.
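As a quick sanity check on Lemma 6.11 and Lemma 6.13 below (an added sketch, not part of the notes), the following code computes (−1)^σ by counting the adjacent transpositions a bubble sort uses to restore (σ1, ..., σn) to (1, ..., n), and verifies multiplicativity of the sign on random permutations of [4].

```python
import itertools, random

def sign(perm):
    """(-1)^sigma computed by counting adjacent transpositions (bubble sort swaps)."""
    p, swaps = list(perm), 0
    for i in range(len(p)):
        for j in range(len(p) - 1 - i):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                swaps += 1
    return (-1) ** swaps

def compose(sigma, tau):
    """(sigma tau)(i) = sigma(tau(i)), permutations stored 0-indexed as tuples."""
    return tuple(sigma[t] for t in tau)

perms = list(itertools.permutations(range(4)))
for _ in range(100):
    s, t = random.choice(perms), random.choice(perms)
    assert sign(compose(s, t)) == sign(s) * sign(t)   # Lemma 6.13
```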

Corollary 6.12. If σ ∈ Σ_n is a transposition, then (−1)^σ = −1.

Proof. This has already been proved in the course of proving Lemma 6.3.

Lemma 6.13. If σ, τ ∈ Σ_n, then (−1)^{στ} = (−1)^σ (−1)^τ and in particular it follows that (−1)^{σ^{-1}} = (−1)^σ.

Proof. Let v_j := e_{σj} for each j; then

(−1)^{στ} := ϕ(e_{στ1}, ..., e_{στn}) = ϕ(v_{τ1}, ..., v_{τn})
= (−1)^τ ϕ(v_1, ..., v_n) = (−1)^τ ϕ(e_{σ1}, ..., e_{σn})
= (−1)^τ (−1)^σ.

Lemma 6.14. A multi-linear map T ∈ L^k(V) is alternating (i.e. T ∈ A^k(V) = Λ^k(V*)) iff

T(v_{σ1}, ..., v_{σk}) = (−1)^σ T(v_1, ..., v_k) for all σ ∈ Σ_k.

2 N is not unique but (−1)^N = (−1)^σ is unique.


Proof. Recall that (−1)^σ = (−1)^N where N is the number of transpositions needed to transform σ to the identity permutation. Each of these transpositions produces an interchange of two entries in the argument list of T and hence introduces a factor of (−1). Thus in total,

T(v_{σ1}, ..., v_{σk}) = (−1)^N T(v_1, ..., v_k) = (−1)^σ T(v_1, ..., v_k).

The converse direction follows from the simple fact that the sign of a transposition is −1.

Notation 6.15 (Pull Backs) Let V and W be finite dimensional vector spaces. To each linear transformation T : V → W, there is a linear transformation T* : Λ^k(W*) → Λ^k(V*) defined by

(T*ϕ)(v_1, ..., v_k) := ϕ(Tv_1, ..., Tv_k)

for all ϕ ∈ Λ^k(W*) and (v_1, ..., v_k) ∈ V^k. [We leave to the reader the easy proof that T*ϕ is indeed in Λ^k(V*).]

Exercise 6.2. Let V, W, and Z be three finite dimensional vector spaces and suppose that V →^T W →^S Z are linear transformations. Noting that V →^{ST} Z, show (ST)* = T* S*.

6.1 Structure of Λn (V ∗) and Determinants

In what follows we will continue to use the notation introduced in Notation 6.8.

Proposition 6.16 (Structure of Λ^n(V*)). If ψ ∈ Λ^n(V*), then ψ = ψ(e_1, ..., e_n) ϕ and in particular, dim Λ^n(V*) = 1. Moreover for any {v_j}_{j=1}^n ⊂ V,

ϕ(v_1, ..., v_n) = Σ_{σ∈Σ_n} (−1)^σ ε_{σ1}(v_1) · · · ε_{σn}(v_n)
= Σ_{σ∈Σ_n} (−1)^σ ε_1(v_{σ1}) · · · ε_n(v_{σn}).

The first equality may be rewritten as

ϕ = Σ_{σ∈Σ_n} (−1)^σ ε_{σ1} ⊗ · · · ⊗ ε_{σn}.

Proof. Let {v_j}_{j=1}^n ⊂ V and recall that

v_j = Σ_{k_j=1}^n ε_{k_j}(v_j) e_{k_j}.

Using the fact that ψ is multi-linear and alternating we find,

ψ(v_1, ..., v_n) = Σ_{k_1,...,k_n=1}^n (∏_{j=1}^n ε_{k_j}(v_j)) ψ(e_{k_1}, ..., e_{k_n})
= Σ_{σ∈Σ_n} (∏_{j=1}^n ε_{σj}(v_j)) ψ(e_{σ1}, ..., e_{σn})
= Σ_{σ∈Σ_n} (∏_{j=1}^n ε_{σj}(v_j)) (−1)^σ ψ(e_1, ..., e_n),

while the same computation shows

ϕ(v_1, ..., v_n) = Σ_{σ∈Σ_n} (∏_{j=1}^n ε_{σj}(v_j)) (−1)^σ ϕ(e_1, ..., e_n)
= Σ_{σ∈Σ_n} (−1)^σ ε_{σ1}(v_1) · · · ε_{σn}(v_n).

Lastly let us note that

∏_{j=1}^n ε_{σj}(v_j) = ∏_{j=1}^n ε_{σσ^{-1}j}(v_{σ^{-1}j}) = ∏_{j=1}^n ε_j(v_{σ^{-1}j})

so that

ϕ(v_1, ..., v_n) = Σ_{σ∈Σ_n} (−1)^σ ∏_{j=1}^n ε_j(v_{σ^{-1}j})
= Σ_{σ∈Σ_n} (−1)^{σ^{-1}} ∏_{j=1}^n ε_j(v_{σ^{-1}j})
= Σ_{σ∈Σ_n} (−1)^σ ∏_{j=1}^n ε_j(v_{σj}),

wherein we have used that Σ_n ∋ σ → σ^{-1} ∈ Σ_n is a bijection for the last equality.

Exercise 6.3. If ψ ∈ Λ^n(V*) \ {0}, show ψ(v_1, ..., v_n) ≠ 0 whenever {v_i}_{i=1}^n ⊂ V are linearly independent. [Coupled with Exercise 6.1, it follows that ψ(v_1, ..., v_n) ≠ 0 iff {v_i}_{i=1}^n ⊂ V are linearly independent.]

Definition 6.17. Suppose that T : V → V is a linear map on a finite dimensional vector space V; then we define det T ∈ R by the relationship T*ψ = det T · ψ, where ψ is any non-zero element in Λ^n(V*). [The reader should verify that det T is independent of the choice of ψ ∈ Λ^n(V*) \ {0}.]


The next lemma gives a slight variant of the definition of the determinant.

Lemma 6.18. If ψ ∈ Λ^n(V*) \ {0}, {e_j}_{j=1}^n is a basis for V, and T : V → V is a linear transformation, then

det T = ψ(Te_1, ..., Te_n) / ψ(e_1, ..., e_n).   (6.3)

Proof. Evaluating the identity det T · ψ = T*ψ at (e_1, ..., e_n) shows

det T · ψ(e_1, ..., e_n) = (T*ψ)(e_1, ..., e_n) = ψ(Te_1, ..., Te_n),

from which the lemma directly follows.

Corollary 6.19. Let T be as in Definition 6.17 and suppose {e_j}_{j=1}^n is a basis for V and {ε_j}_{j=1}^n is its dual basis. Then

det T = Σ_{σ∈Σ_n} (−1)^σ ε_1(Te_{σ1}) · · · ε_n(Te_{σn})
= Σ_{σ∈Σ_n} (−1)^σ ε_{σ1}(Te_1) · · · ε_{σn}(Te_n).

Proof. We take ϕ ∈ Λ^n(V*) so that ϕ(e_1, ..., e_n) = 1. Since T*ϕ ∈ Λ^n(V*) we have seen that T*ϕ = λϕ where

λ = (T*ϕ)(e_1, ..., e_n) = ϕ(Te_1, ..., Te_n)
= Σ_{σ∈Σ_n} (−1)^σ ε_{σ1}(Te_1) · · · ε_{σn}(Te_n)
= Σ_{σ∈Σ_n} (−1)^σ ε_1(Te_{σ1}) · · · ε_n(Te_{σn}).

Corollary 6.20. Suppose that S, T : V → V are linear maps on a finite dimensional vector space V; then

det(ST) = det(S) · det(T).

Proof. On one hand,

(ST)*ϕ = det(ST) ϕ.

On the other hand, using Exercise 6.2 we have

(ST)*ϕ = T*(S*ϕ) = T*(det S · ϕ) = det S · T*(ϕ) = det S · det T · ϕ.

Comparing the last two equations completes the proof.

6.2 Determinants of Matrices

In this section we will restrict our attention to linear transformations on V = R^n, which we identify with n×n matrices. Also, for the purposes of this section let {e_j}_{j=1}^n be the standard basis for R^n. Finally recall that the ith column of A is v_i = Ae_i and so we may express A as

A = [v_1| ... |v_n] = [Ae_1| ... |Ae_n].

Proposition 6.21. The function A → det(A) is the unique alternating multi-linear function of the columns of A such that det(I) = det[e_1| ... |e_n] = 1.

Proof. Let ψ ∈ A^n(R^n) \ {0}. Then by Lemma 6.18,

det A = ψ(Ae_1, ..., Ae_n) / ψ(e_1, ..., e_n),

which shows that det A is an alternating multi-linear function of the columns of A. We have already seen in Proposition 6.16 that there is only one such function.

Theorem 6.22. If A is an n×n matrix which we view as a linear transformation on R^n, then;

1. det A = Σ_{σ∈Σ_n} (−1)^σ a_{σ1,1} · · · a_{σn,n},
2. det A = Σ_{σ∈Σ_n} (−1)^σ a_{1,σ1} · · · a_{n,σn},
3. det A = det A^tr, and
4. the map A → det A is the unique alternating multi-linear function of the rows of A such that det I = 1.

Proof. We take {e_i}_{i=1}^n to be the standard basis for R^n and {ε_i}_{i=1}^n to be its dual basis. Then by Corollary 6.19,

det A = Σ_{σ∈Σ_n} (−1)^σ ε_1(Ae_{σ1}) · · · ε_n(Ae_{σn})
= Σ_{σ∈Σ_n} (−1)^σ ε_{σ1}(Ae_1) · · · ε_{σn}(Ae_n),

which completes the proof of items 1. and 2. since ε_i(Ae_j) = a_{i,j}. For item 3. we use item 1. with A replaced by A^tr to find,

det A^tr = Σ_{σ∈Σ_n} (−1)^σ (A^tr)_{σ1,1} · · · (A^tr)_{σn,n} = Σ_{σ∈Σ_n} (−1)^σ a_{1,σ1} · · · a_{n,σn}.

This completes the proof of item 3. since the latter expression is equal to det A by item 2. Finally item 4. follows from item 3. and Proposition 6.21.
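As an illustration (added here, not from the notes), the permutation-sum formula of item 1. can be checked numerically against numpy's determinant; the sign is computed by counting inversions.

```python
import itertools
import numpy as np

def sign(perm):
    """(-1)^sigma via counting inversions of the permutation."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

def det_leibniz(A):
    """det A = sum_sigma (-1)^sigma a_{sigma(1),1} ... a_{sigma(n),n} (item 1. of Theorem 6.22)."""
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[p[i], i] for i in range(n)])
               for p in itertools.permutations(range(n)))

A = np.random.default_rng(1).normal(size=(4, 4))
assert np.isclose(det_leibniz(A), np.linalg.det(A))
assert np.isclose(det_leibniz(A.T), det_leibniz(A))  # item 3: det A^tr = det A
```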


Proposition 6.23. Suppose that n = n_1 + n_2 with n_i ∈ N and T is an n×n matrix which has the block form

T = \begin{bmatrix} A & B \\ 0_{n_2×n_1} & C \end{bmatrix},

where A is an n_1×n_1 matrix, C is an n_2×n_2 matrix, and B is an n_1×n_2 matrix. Then

det T = det A · det C.

Proof. Fix B and C and consider δ(A) := det \begin{bmatrix} A & B \\ 0_{n_2×n_1} & C \end{bmatrix}. As a function of the columns of A, δ is an alternating multi-linear form on R^{n_1} and hence

δ(A) = det(A) · δ(I) = det(A) · det \begin{bmatrix} I & B \\ 0_{n_2×n_1} & C \end{bmatrix}.

By doing standard column operations it follows that

det \begin{bmatrix} I & B \\ 0_{n_2×n_1} & C \end{bmatrix} = det \begin{bmatrix} I & 0_{n_1×n_2} \\ 0_{n_2×n_1} & C \end{bmatrix} =: δ̃(C).

Working with δ̃ as we did with δ, we conclude that δ̃(C) = det(C) · δ̃(I) = det C. Putting this all together completes the proof.
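A quick numerical sanity check of Proposition 6.23 (an added sketch, not in the original notes):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 2
A = rng.normal(size=(n1, n1))
B = rng.normal(size=(n1, n2))
C = rng.normal(size=(n2, n2))

# Assemble the block upper-triangular matrix T = [[A, B], [0, C]].
T = np.block([[A, B], [np.zeros((n2, n1)), C]])
assert np.isclose(np.linalg.det(T), np.linalg.det(A) * np.linalg.det(C))
```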

Next we want to prove the standard cofactor expansion of det A.

Notation 6.24 If A is an n×n matrix and 1 ≤ i, j ≤ n, let A(i, j) denote A with its ith row and jth column deleted.

Proposition 6.25 (Co-factor Expansion). If A is an n×n matrix and 1 ≤ j ≤ n, then

det(A) = Σ_{i=1}^n (−1)^{i+j} a_{ij} det[A(i, j)]   (6.4)

and similarly if 1 ≤ i ≤ n, then

det(A) = Σ_{j=1}^n (−1)^{i+j} a_{ij} det[A(i, j)].   (6.5)

We refer to Eq. (6.4) as the cofactor expansion along the jth column and Eq. (6.5) as the cofactor expansion along the ith row.

Proof. Equation (6.5) follows from Eq. (6.4) with the aid of item 3. of Theorem 6.22. To prove Eq. (6.4), let A = [v_1| ... |v_n], for b ∈ R^n let b^{(i)} := b − b_i e_i, and write v_j = Σ_{i=1}^n a_{ij} e_i. We then find,

det A = Σ_{i=1}^n a_{ij} det[v_1| ... |v_{j−1}|e_i|v_{j+1}| ... |v_n]
= Σ_{i=1}^n a_{ij} det[v_1^{(i)}| ... |v_{j−1}^{(i)}|e_i|v_{j+1}^{(i)}| ... |v_n^{(i)}]
= Σ_{i=1}^n a_{ij} (−1)^{j−1} det[e_i|v_1^{(i)}| ... |v_{j−1}^{(i)}|v_{j+1}^{(i)}| ... |v_n^{(i)}]
= Σ_{i=1}^n a_{ij} (−1)^{j−1} (−1)^{i−1} det \begin{bmatrix} 1 & 0 \\ 0 & A(i, j) \end{bmatrix}
= Σ_{i=1}^n (−1)^{i+j} a_{ij} det[A(i, j)],

wherein we have used that the determinant changes sign any time one interchanges two columns or two rows.
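For concreteness (an addition, not part of the notes), here is a direct recursive implementation of the cofactor expansion along the first column, i.e. Eq. (6.4) with j = 1, compared against numpy:

```python
import numpy as np

def det_cofactor(A):
    """Recursive cofactor expansion of det A along the first column (Eq. (6.4), j = 1)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)  # A(i, 1)
        total += (-1) ** i * A[i, 0] * det_cofactor(minor)
    return total

A = np.random.default_rng(3).normal(size=(5, 5))
assert np.isclose(det_cofactor(A), np.linalg.det(A))
```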

Example 6.26. Let us illustrate the above proof in the 3×3 case by expanding along the second column. To shorten the notation we write det A = |A|;

\begin{vmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{vmatrix}
= a_{12}\begin{vmatrix} a_{11} & 1 & a_{13}\\ a_{21} & 0 & a_{23}\\ a_{31} & 0 & a_{33}\end{vmatrix}
+ a_{22}\begin{vmatrix} a_{11} & 0 & a_{13}\\ a_{21} & 1 & a_{23}\\ a_{31} & 0 & a_{33}\end{vmatrix}
+ a_{32}\begin{vmatrix} a_{11} & 0 & a_{13}\\ a_{21} & 0 & a_{23}\\ a_{31} & 1 & a_{33}\end{vmatrix}

where

\begin{vmatrix} a_{11} & 1 & a_{13}\\ a_{21} & 0 & a_{23}\\ a_{31} & 0 & a_{33}\end{vmatrix}
= \begin{vmatrix} 0 & 1 & 0\\ a_{21} & 0 & a_{23}\\ a_{31} & 0 & a_{33}\end{vmatrix}
= -\begin{vmatrix} 1 & 0 & 0\\ 0 & a_{21} & a_{23}\\ 0 & a_{31} & a_{33}\end{vmatrix}
= -\begin{vmatrix} a_{21} & a_{23}\\ a_{31} & a_{33}\end{vmatrix} = -det A(1, 2),

\begin{vmatrix} a_{11} & 0 & a_{13}\\ a_{21} & 1 & a_{23}\\ a_{31} & 0 & a_{33}\end{vmatrix}
= \begin{vmatrix} a_{11} & 0 & a_{13}\\ 0 & 1 & 0\\ a_{31} & 0 & a_{33}\end{vmatrix}
= -\begin{vmatrix} 0 & 1 & 0\\ a_{11} & 0 & a_{13}\\ a_{31} & 0 & a_{33}\end{vmatrix}
= \begin{vmatrix} 1 & 0 & 0\\ 0 & a_{11} & a_{13}\\ 0 & a_{31} & a_{33}\end{vmatrix} = det[A(2, 2)],

and

\begin{vmatrix} a_{11} & 0 & a_{13}\\ a_{21} & 0 & a_{23}\\ a_{31} & 1 & a_{33}\end{vmatrix}
= -\begin{vmatrix} 0 & a_{11} & a_{13}\\ 0 & a_{21} & a_{23}\\ 1 & 0 & 0\end{vmatrix}
= \begin{vmatrix} 0 & a_{11} & a_{13}\\ 1 & 0 & 0\\ 0 & a_{21} & a_{23}\end{vmatrix}
= -\begin{vmatrix} 1 & 0 & 0\\ 0 & a_{11} & a_{13}\\ 0 & a_{21} & a_{23}\end{vmatrix} = -det[A(3, 2)].

6.3 The structure of Λ^k(V*)

Definition 6.27. Let m ∈ N and {ℓ_j}_{j=1}^m ⊂ V*; we define ℓ_1 ∧ · · · ∧ ℓ_m ∈ A^m(V) by


(ℓ_1 ∧ · · · ∧ ℓ_m)(v_1, ..., v_m) = det \begin{bmatrix} ℓ_1(v_1) & \dots & ℓ_1(v_m)\\ ℓ_2(v_1) & \dots & ℓ_2(v_m)\\ \vdots & & \vdots\\ ℓ_m(v_1) & \dots & ℓ_m(v_m)\end{bmatrix}   (6.6)
= Σ_{σ∈Σ_m} (−1)^σ ∏_{i=1}^m ℓ_i(v_{σi}) = Σ_{σ∈Σ_m} (−1)^σ ℓ_1(v_{σ1}) · · · ℓ_m(v_{σm}),   (6.7)

or alternatively, using det A^tr = det A,

(ℓ_1 ∧ · · · ∧ ℓ_m)(v_1, ..., v_m) = det \begin{bmatrix} ℓ_1(v_1) & ℓ_2(v_1) & \dots & ℓ_m(v_1)\\ ℓ_1(v_2) & ℓ_2(v_2) & \dots & ℓ_m(v_2)\\ \vdots & \vdots & & \vdots\\ ℓ_1(v_m) & ℓ_2(v_m) & \dots & ℓ_m(v_m)\end{bmatrix}   (6.8)
= Σ_{σ∈Σ_m} (−1)^σ ∏_{i=1}^m ℓ_{σi}(v_i) = Σ_{σ∈Σ_m} (−1)^σ ℓ_{σ1}(v_1) · · · ℓ_{σm}(v_m),   (6.9)

which may be written as

ℓ_1 ∧ · · · ∧ ℓ_m = Σ_{σ∈Σ_m} (−1)^σ ℓ_{σ1} ⊗ · · · ⊗ ℓ_{σm}.   (6.10)

Remark 6.28. It is perhaps easier to remember these equations as

(ℓ_1 ∧ · · · ∧ ℓ_m)(v_1, ..., v_m) = det \begin{bmatrix} ℓ_1(v_1, ..., v_m)\\ ℓ_2(v_1, ..., v_m)\\ \vdots\\ ℓ_m(v_1, ..., v_m)\end{bmatrix}
= det \begin{bmatrix} ℓ_1\begin{pmatrix} v_1\\ v_2\\ \vdots\\ v_m\end{pmatrix} & ℓ_2\begin{pmatrix} v_1\\ v_2\\ \vdots\\ v_m\end{pmatrix} & \dots & ℓ_m\begin{pmatrix} v_1\\ v_2\\ \vdots\\ v_m\end{pmatrix}\end{bmatrix}

where

ℓ(v_1, ..., v_m) := [ℓ(v_1) ... ℓ(v_m)] (a row) and ℓ applied to the column of vectors (v_1, ..., v_m)^tr is the column (ℓ(v_1), ..., ℓ(v_m))^tr.
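The determinant formula (6.6) is easy to turn into a numerical check (this sketch is an addition to the notes): represent each ℓ_i ∈ (R^n)^* by a row vector, so ℓ_i(v) is a dot product.

```python
import numpy as np

def wedge_of_covectors(ells, vs):
    """(ell_1 ^ ... ^ ell_m)(v_1, ..., v_m) = det[ ell_i(v_j) ], Eq. (6.6).
    `ells` and `vs` are lists of 1d arrays; ell_i(v) is the dot product ell_i . v."""
    M = np.array([[np.dot(l, v) for v in vs] for l in ells])
    return np.linalg.det(M)

# On R^3 with the dual standard basis eps_i, (eps_1 ^ eps_2 ^ eps_3)(v1, v2, v3)
# is just det[v1|v2|v3].
eps = np.eye(3)
rng = np.random.default_rng(4)
vs = [rng.normal(size=3) for _ in range(3)]
assert np.isclose(wedge_of_covectors(list(eps), vs),
                  np.linalg.det(np.column_stack(vs)))
```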

Exercise 6.4. Let {e_i}_{i=1}^4 be the standard basis for R^4 and {ε_i = e_i^*}_{i=1}^4 be the associated dual basis (i.e. ε_i(v) = v_i for all v ∈ R^4). Compute;

1. ε_3 ∧ ε_2 ∧ ε_4 ((1, 2, 3, 4)^tr, (0, 1, −1, 1)^tr, (1, 0, 3, 2)^tr),
2. ε_3 ∧ ε_2 ((1, 2, 3, 4)^tr, (0, 1, −1, 1)^tr),
3. ε_1 ∧ ε_2 ((1, 2, 3, 4)^tr, (0, 1, −1, 1)^tr),
4. (ε_1 + ε_3) ∧ ε_2 ((1, 2, 3, 4)^tr, (0, 1, −1, 1)^tr), and
5. ε_4 ∧ ε_3 ∧ ε_2 ∧ ε_1 (e_1, e_2, e_3, e_4).

The next problem is a special case of Theorem 6.30 below.

Exercise 6.5. Show, using basic knowledge of determinants, that for ℓ_0, ℓ_1, ℓ_2, ℓ_3 ∈ V*,

(ℓ_0 + ℓ_1) ∧ ℓ_2 ∧ ℓ_3 = ℓ_0 ∧ ℓ_2 ∧ ℓ_3 + ℓ_1 ∧ ℓ_2 ∧ ℓ_3.

Remark 6.29. Note that

ℓ_{σ1} ∧ · · · ∧ ℓ_{σm} = (−1)^σ ℓ_1 ∧ · · · ∧ ℓ_m

for all σ ∈ Σ_m and in particular if m = p + q with p, q ∈ N, then

ℓ_{p+1} ∧ · · · ∧ ℓ_m ∧ ℓ_1 ∧ · · · ∧ ℓ_p = (−1)^{pq} ℓ_1 ∧ · · · ∧ ℓ_p ∧ ℓ_{p+1} ∧ · · · ∧ ℓ_m.

Theorem 6.30. For any fixed ℓ_2, ..., ℓ_k ∈ V*, the map

V* ∋ ℓ_1 → ℓ_1 ∧ · · · ∧ ℓ_k ∈ Λ^k(V*)

is linear.

Proof. From Eq. (6.7) we find,

((ℓ_1 + c ℓ̃_1) ∧ · · · ∧ ℓ_k)(v_1, ..., v_k)
= Σ_{σ∈Σ_k} (−1)^σ (ℓ_1 + c ℓ̃_1)(v_{σ1}) · · · ℓ_k(v_{σk})
= Σ_{σ∈Σ_k} (−1)^σ ℓ_1(v_{σ1}) · · · ℓ_k(v_{σk}) + c Σ_{σ∈Σ_k} (−1)^σ ℓ̃_1(v_{σ1}) · · · ℓ_k(v_{σk})
= ℓ_1 ∧ · · · ∧ ℓ_k(v_1, ..., v_k) + c · ℓ̃_1 ∧ · · · ∧ ℓ_k(v_1, ..., v_k)
= (ℓ_1 ∧ · · · ∧ ℓ_k + c · ℓ̃_1 ∧ · · · ∧ ℓ_k)(v_1, ..., v_k).

As this holds for all (v_1, ..., v_k), it follows that

(ℓ_1 + c ℓ̃_1) ∧ · · · ∧ ℓ_k = ℓ_1 ∧ · · · ∧ ℓ_k + c · ℓ̃_1 ∧ · · · ∧ ℓ_k,

which is the desired linearity.

Remark 6.31. If W is another finite dimensional vector space and T : W → V is a linear transformation, then T*(ℓ_1 ∧ · · · ∧ ℓ_m) = (T*ℓ_1) ∧ · · · ∧ (T*ℓ_m). To see this is the case, let w_i ∈ W for i ∈ [m]; then

T*(ℓ_1 ∧ · · · ∧ ℓ_m)(w_1, ..., w_m) = (ℓ_1 ∧ · · · ∧ ℓ_m)(Tw_1, ..., Tw_m)
= Σ_{σ∈Σ_m} (−1)^σ ∏_{i=1}^m ℓ_i(Tw_{σi}) = Σ_{σ∈Σ_m} (−1)^σ ∏_{i=1}^m (T*ℓ_i)(w_{σi})
= (T*ℓ_1) ∧ · · · ∧ (T*ℓ_m)(w_1, ..., w_m).

Example 6.32. Let T ∈ Λ^2([R^3]^*) and v, w ∈ R^3. Then

T(v, w) = T(v_1 e_1 + v_2 e_2 + v_3 e_3, w_1 e_1 + w_2 e_2 + w_3 e_3)
= T(e_1, e_2)(v_1 w_2 − w_1 v_2) + T(e_1, e_3)(v_1 w_3 − w_1 v_3) + T(e_2, e_3)(v_2 w_3 − w_2 v_3)
= [T(e_1, e_2) ε_1 ∧ ε_2 + T(e_1, e_3) ε_1 ∧ ε_3 + T(e_2, e_3) ε_2 ∧ ε_3](v, w),

from which it follows that

T = T(e_1, e_2) ε_1 ∧ ε_2 + T(e_1, e_3) ε_1 ∧ ε_3 + T(e_2, e_3) ε_2 ∧ ε_3.

Further note that if

a_{12} ε_1 ∧ ε_2 + a_{13} ε_1 ∧ ε_3 + a_{23} ε_2 ∧ ε_3 = 0,

then evaluating this expression at (e_i, e_j) for 1 ≤ i < j ≤ 3 allows us to conclude that a_{ij} = 0 for 1 ≤ i < j ≤ 3. Therefore {ε_i ∧ ε_j : 1 ≤ i < j ≤ 3} is a basis for Λ^2([R^3]^*). This example is generalized in the next theorem.

Theorem 6.33. Let {e_i}_{i=1}^N be a basis for V and {ε_i}_{i=1}^N be its dual basis, and for

J = {1 ≤ a_1 < a_2 < · · · < a_p ≤ N} ⊂ [N] with #(J) = p,

let

e_J := (e_{a_1}, ..., e_{a_p}) and ε_J := ε_{a_1} ∧ · · · ∧ ε_{a_p}.   (6.11)

Then;

1. β_p := {ε_J : J ⊂ [N] with #(J) = p} is a basis for Λ^p(V*) and so dim(Λ^p(V*)) = (N choose p), and
2. any A ∈ Λ^p(V*) admits the following expansions,

A = (1/p!) Σ_{j_1,...,j_p=1}^N A(e_{j_1}, ..., e_{j_p}) ε_{j_1} ∧ · · · ∧ ε_{j_p}   (6.12)
= Σ_{J⊂[N]: #(J)=p} A(e_J) ε_J.   (6.13)

Proof. We begin by proving Eqs. (6.12) and (6.13). To this end let v_1, ..., v_p ∈ V and then compute, using the multi-linear and alternating properties of A, that

A(v_1, ..., v_p) = Σ_{j_1,...,j_p=1}^N ε_{j_1}(v_1) · · · ε_{j_p}(v_p) A(e_{j_1}, ..., e_{j_p})   (6.14)
= Σ_{j_1,...,j_p=1}^N (1/p!) Σ_{σ∈Σ_p} ε_{j_{σ1}}(v_1) · · · ε_{j_{σp}}(v_p) A(e_{j_{σ1}}, ..., e_{j_{σp}})
= Σ_{j_1,...,j_p=1}^N (1/p!) Σ_{σ∈Σ_p} (−1)^σ ε_{j_1}(v_{σ^{-1}1}) · · · ε_{j_p}(v_{σ^{-1}p}) A(e_{j_1}, ..., e_{j_p})
= (1/p!) Σ_{j_1,...,j_p=1}^N A(e_{j_1}, ..., e_{j_p}) ε_{j_1} ∧ · · · ∧ ε_{j_p}(v_1, ..., v_p),

which is Eq. (6.12). Alternatively we may write Eq. (6.14) as

A(v_1, ..., v_p) = Σ_{j_1,...,j_p=1}^N 1_{#{j_1,...,j_p}=p} ε_{j_1}(v_1) · · · ε_{j_p}(v_p) A(e_{j_1}, ..., e_{j_p})
= Σ_{1≤a_1<a_2<···<a_p≤N} Σ_{σ∈Σ_p} ε_{a_{σ1}}(v_1) · · · ε_{a_{σp}}(v_p) A(e_{a_{σ1}}, ..., e_{a_{σp}})
= Σ_{1≤a_1<a_2<···<a_p≤N} A(e_{a_1}, ..., e_{a_p}) Σ_{σ∈Σ_p} (−1)^σ ε_{a_{σ1}}(v_1) · · · ε_{a_{σp}}(v_p)
= Σ_{1≤a_1<a_2<···<a_p≤N} A(e_{a_1}, ..., e_{a_p}) ε_{a_1} ∧ · · · ∧ ε_{a_p}(v_1, ..., v_p)
= Σ_{J⊂[N]} A(e_J) ε_J(v_1, ..., v_p),

which verifies Eq. (6.13), and hence item 2. is proved.

To prove item 1., since by Eq. (6.13) we know that β_p spans Λ^p(V*), it suffices to show β_p is linearly independent. The key point is that for

J = {1 ≤ a_1 < a_2 < · · · < a_p ≤ N} and K = {1 ≤ b_1 < b_2 < · · · < b_p ≤ N}

we have

ε_J(e_K) = det \begin{bmatrix} ε_{a_1}(e_{b_1}) & \dots & ε_{a_1}(e_{b_p})\\ ε_{a_2}(e_{b_1}) & \dots & ε_{a_2}(e_{b_p})\\ \vdots & & \vdots\\ ε_{a_p}(e_{b_1}) & \dots & ε_{a_p}(e_{b_p})\end{bmatrix} = δ_{J,K}.

Thus if Σ_{J⊂[N]} a_J ε_J = 0, then

0 = 0(e_K) = Σ_{J⊂[N]} a_J ε_J(e_K) = Σ_{J⊂[N]} a_J δ_{J,K} = a_K,

which shows that a_K = 0 for all K as above.
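A small numerical illustration of the key point ε_J(e_K) = δ_{J,K} (an addition, not in the notes), using the determinant formula (6.6) for ε_J = ε_{a_1} ∧ · · · ∧ ε_{a_p}:

```python
import itertools
import numpy as np

N, p = 4, 2
eps = np.eye(N)          # eps[i] represents the dual basis covector eps_{i+1}

def eps_J(J, vectors):
    """(eps_{a_1} ^ ... ^ eps_{a_p})(v_1, ..., v_p) as the determinant in Eq. (6.6)."""
    return np.linalg.det(np.array([[np.dot(eps[a], v) for v in vectors] for a in J]))

subsets = list(itertools.combinations(range(N), p))
for J in subsets:
    for K in subsets:
        e_K = [eps[b] for b in K]          # e_K = (e_{b_1}, ..., e_{b_p})
        expected = 1.0 if J == K else 0.0  # delta_{J,K}
        assert np.isclose(eps_J(J, e_K), expected)

print(len(subsets))  # = binom(N, p) = dim Lambda^p((R^N)^*)
```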

Exercise 6.6. Suppose {ℓ_j}_{j=1}^k ⊂ [R^n]^*.

1. Explain why ℓ_1 ∧ · · · ∧ ℓ_k = 0 if ℓ_i = ℓ_j for some i ≠ j.
2. Show ℓ_1 ∧ · · · ∧ ℓ_k = 0 if {ℓ_j}_{j=1}^k are linearly dependent. [You may assume that ℓ_1 = Σ_{j=2}^k a_j ℓ_j for some a_j ∈ R.]

Exercise 6.7. If {ℓ_j}_{j=1}^k ⊂ [R^n]^* are linearly independent, show

ℓ_1 ∧ · · · ∧ ℓ_k ≠ 0.

Hint: make use of Exercise 5.4.


7 Exterior/Wedge and Interior Products

The main goal of this chapter is to define a good notion of how to multiply two alternating multi-linear forms. The multiplication will be referred to as the “wedge product.” Here is the result we wish to prove, whose proof will be delayed until Section 7.4.

Theorem 7.1. Let V be a finite dimensional vector space, n = dim(V), p, q ∈ [n], and let m = p + q. Then there is a unique bilinear map,

M_{p,q} : Λ^p(V*) × Λ^q(V*) → Λ^m(V*),

such that for any {f_i}_{i=1}^p ⊂ V* and {g_j}_{j=1}^q ⊂ V*, we have

M_{p,q}(f_1 ∧ · · · ∧ f_p, g_1 ∧ · · · ∧ g_q) = f_1 ∧ · · · ∧ f_p ∧ g_1 ∧ · · · ∧ g_q.   (7.1)

The notation M_{p,q} in the previous theorem is a bit bulky and so we introduce the following (also temporary) notation.

Notation 7.2 (Preliminary) For A ∈ Λ^p(V*) and B ∈ Λ^q(V*), let us simply denote M_{p,q}(A, B) by A · B.^1

Remark 7.3. If m = p + q > n, then Λ^m(V*) = {0} and hence A · B = 0.

7.1 Consequences of Theorem 7.1

Before going to the proof of Theorem 7.1 (see Section 7.4) let us work out some of its consequences. By Theorem 6.33 it is always possible to write A ∈ Λ^p(V*) in the form

A = Σ_{i=1}^α a_i f^i_1 ∧ · · · ∧ f^i_p   (7.2)

for some α ∈ N, {a_i}_{i=1}^α ⊂ R, and {f^i_j : j ∈ [p] and i ∈ [α]} ⊂ V*. Similarly we may write B ∈ Λ^q(V*) in the form

B = Σ_{j=1}^β b_j g^j_1 ∧ · · · ∧ g^j_q   (7.3)

for some β ∈ N, {b_j}_{j=1}^β ⊂ R, and {g^j_k : k ∈ [q] and j ∈ [β]} ⊂ V*. Thus by Theorem 7.1 we must have

1 We will see shortly that it is reasonable and more suggestive to write A ∧ B rather than A · B. We will make this change after it is justified, see Notation 7.7 below.

A · B = M_{p,q}(A, B) = Σ_{i=1}^α Σ_{j=1}^β a_i b_j M_{p,q}(f^i_1 ∧ · · · ∧ f^i_p, g^j_1 ∧ · · · ∧ g^j_q)
= Σ_{i=1}^α Σ_{j=1}^β a_i b_j f^i_1 ∧ · · · ∧ f^i_p ∧ g^j_1 ∧ · · · ∧ g^j_q.   (7.4)

Proposition 7.4 (Associativity). If A ∈ Λ^p(V*), B ∈ Λ^q(V*), and C ∈ Λ^r(V*) for some r ∈ [n], then

(A · B) · C = A · (B · C).   (7.5)

Proof. Let us express C as

C = Σ_{k=1}^γ c_k h^k_1 ∧ · · · ∧ h^k_r.

Then working as above we find with the aid of Eq. (7.4) that

(A · B) · C = Σ_{i=1}^α Σ_{j=1}^β Σ_{k=1}^γ a_i b_j c_k f^i_1 ∧ · · · ∧ f^i_p ∧ g^j_1 ∧ · · · ∧ g^j_q ∧ h^k_1 ∧ · · · ∧ h^k_r.

A completely analogous computation then shows that A · (B · C) is also given by the right side of the previously displayed equation and so Eq. (7.5) is proved.

Remark 7.5. Since our multiplication rule is associative it now makes sense to simply write A · B · C rather than (A · B) · C or A · (B · C). More generally if A_j ∈ Λ^{p_j}(V*) we may now simply write A_1 · · · · · A_k. For example by the above associativity we may easily show,


A · (B · (C ·D)) = (A ·B) · (C ·D) = ((A ·B) · C) ·D

and so it makes sense to simply write A·B ·C ·D for any one of these expressions.

Corollary 7.6. If {ℓ_j}_{j=1}^p ⊂ V*, then

ℓ_1 · · · · · ℓ_p = ℓ_1 ∧ · · · ∧ ℓ_p.

Proof. For clarity of the argument let us suppose that p = 5, in which case we have

`1 · `2 · `3 · `4 · `5 = `1 · (`2 · (`3 · (`4 · `5)))

= `1 · (`2 · (`3 · (`4 ∧ `5)))

= `1 · (`2 · (`3 ∧ `4 ∧ `5))

= `1 · (`2 ∧ `3 ∧ `4 ∧ `5)

= `1 ∧ `2 ∧ `3 ∧ `4 ∧ `5.

Because of Corollary 7.6 there is no longer any danger in denoting A · B = M_{p,q}(A, B) by A ∧ B. Moreover, this notation suggestively leads one to the correct multiplication formulas.

Notation 7.7 (Wedge=Exterior Product) For A ∈ Λ^p(V*) and B ∈ Λ^q(V*), we will from now on denote M_{p,q}(A, B) by A ∧ B.

Although the wedge product is associative, one must be careful to observe that the wedge product is not commutative, i.e. groupings do not matter but order may matter.

Lemma 7.8 (Non-commutativity). For A ∈ Λ^p(V*) and B ∈ Λ^q(V*) we have

A ∧ B = (−1)^{pq} B ∧ A.

Proof. See Remark 6.29.

Example 7.9. Suppose that {ε_j}_{j=1}^5 is the standard dual basis on R^5 and

α = 2ε1 − 3ε3, β = ε2 ∧ ε4 + (ε1 + ε3) ∧ ε5.

Find and simplify formulas for α ∧ α, α ∧ β and β ∧ β.

1. α ∧ α = 0 since α ∧ α = −α ∧ α.

2.

α ∧ β = (2ε1 − 3ε3) ∧ ε2 ∧ ε4 + (2ε1 − 3ε3) ∧ (ε1 + ε3) ∧ ε5

= 2ε1 ∧ ε2 ∧ ε4 + 3ε2 ∧ ε3 ∧ ε4

+ 2ε1 ∧ (ε1 + ε3) ∧ ε5 − 3ε3 ∧ (ε1 + ε3) ∧ ε5

= 2ε1 ∧ ε2 ∧ ε4 + 3ε2 ∧ ε3 ∧ ε4 + 2ε1 ∧ ε3 ∧ ε5 + 3ε1 ∧ ε3 ∧ ε5

= 2ε1 ∧ ε2 ∧ ε4 + 3ε2 ∧ ε3 ∧ ε4 + 5ε1 ∧ ε3 ∧ ε5.

3. Finally,

β ∧ β = [ε_2 ∧ ε_4 + (ε_1 + ε_3) ∧ ε_5] ∧ [ε_2 ∧ ε_4 + (ε_1 + ε_3) ∧ ε_5]
= ε_2 ∧ ε_4 ∧ (ε_1 + ε_3) ∧ ε_5 + (ε_1 + ε_3) ∧ ε_5 ∧ ε_2 ∧ ε_4
= ε_2 ∧ ε_4 ∧ ε_1 ∧ ε_5 + ε_2 ∧ ε_4 ∧ ε_3 ∧ ε_5 + ε_1 ∧ ε_5 ∧ ε_2 ∧ ε_4 + ε_3 ∧ ε_5 ∧ ε_2 ∧ ε_4
= ε_1 ∧ ε_2 ∧ ε_4 ∧ ε_5 − ε_2 ∧ ε_3 ∧ ε_4 ∧ ε_5 + ε_1 ∧ ε_2 ∧ ε_4 ∧ ε_5 − ε_2 ∧ ε_3 ∧ ε_4 ∧ ε_5
= 2 ε_1 ∧ ε_2 ∧ ε_4 ∧ ε_5 − 2 ε_2 ∧ ε_3 ∧ ε_4 ∧ ε_5.

Theorem 7.10 (Pull-Backs and Wedges). If A : V → W is a linear transformation, ω ∈ Λ^k(W*), and η ∈ Λ^l(W*), then

A*(ω ∧ η) = A*ω ∧ A*η   (7.6)

and in particular if ℓ_1, ..., ℓ_k ∈ W*, then

A*[ℓ_1 ∧ · · · ∧ ℓ_k] = A*ℓ_1 ∧ · · · ∧ A*ℓ_k.   (7.7)

Proof. Equation (7.6) follows directly from Eq. (7.18) used below in the proof of Theorem 7.1. Equation (7.7) then follows from Eq. (7.6) by induction on k. However, not wanting to use the proof of Theorem 7.1 in this proof, we will give another proof which only uses the material presented so far.

To prove Eq. (7.7), simply let {v_i}_{i=1}^k ⊂ V and compute

A*[ℓ_1 ∧ · · · ∧ ℓ_k](v_1, ..., v_k) = ℓ_1 ∧ · · · ∧ ℓ_k(Av_1, ..., Av_k)
= det[ℓ_i(Av_j)] = det[A*ℓ_i(v_j)]
= (A*ℓ_1 ∧ · · · ∧ A*ℓ_k)(v_1, ..., v_k).

As this is true for all (v_1, ..., v_k) ∈ V^k, Eq. (7.7) follows.

Since both sides of Eq. (7.6) are bilinear functions of ω and η, it suffices to verify Eq. (7.6) in the special case where

ω = ℓ_1 ∧ · · · ∧ ℓ_k and η = f_1 ∧ · · · ∧ f_l

for some ℓ_1, ..., ℓ_k, f_1, ..., f_l ∈ W*. However this is now simply done using Eq. (7.7),

A*(ω ∧ η) = A*(ℓ_1 ∧ · · · ∧ ℓ_k ∧ f_1 ∧ · · · ∧ f_l)
= A*ℓ_1 ∧ · · · ∧ A*ℓ_k ∧ A*f_1 ∧ · · · ∧ A*f_l
= [A*ℓ_1 ∧ · · · ∧ A*ℓ_k] ∧ [A*f_1 ∧ · · · ∧ A*f_l]
= A*ω ∧ A*η.

7.2 Interior product

There is yet one more product structure on Λ^m(V*) that we will use throughout these notes, given in the following definition.

Definition 7.11 (Interior product). For v ∈ V and T ∈ Λ^m(V*), let i_v T ∈ Λ^{m−1}(V*) be defined by i_v T = T(v, · · · ).

Lemma 7.12. If {ℓ_i}_{i=1}^m ⊂ V*, T = ℓ_1 ∧ · · · ∧ ℓ_m, and v ∈ V, then

i_v(ℓ_1 ∧ · · · ∧ ℓ_m) = Σ_{j=1}^m (−1)^{j−1} ℓ_j(v) · ℓ_1 ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m,   (7.8)

where the hat indicates that the factor ℓ_j is omitted.

Proof. Expanding the determinant along its first column we find,

T(v_1, ..., v_m) = \begin{vmatrix} ℓ_1(v_1) & \dots & ℓ_1(v_m)\\ ℓ_2(v_1) & \dots & ℓ_2(v_m)\\ \vdots & & \vdots\\ ℓ_m(v_1) & \dots & ℓ_m(v_m)\end{vmatrix}
= Σ_{j=1}^m (−1)^{j−1} ℓ_j(v_1) · \begin{vmatrix} ℓ_1(v_2) & \dots & ℓ_1(v_m)\\ \vdots & & \vdots\\ ℓ_{j−1}(v_2) & \dots & ℓ_{j−1}(v_m)\\ ℓ_{j+1}(v_2) & \dots & ℓ_{j+1}(v_m)\\ \vdots & & \vdots\\ ℓ_m(v_2) & \dots & ℓ_m(v_m)\end{vmatrix}
= Σ_{j=1}^m (−1)^{j−1} ℓ_j(v_1) (ℓ_1 ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m)(v_2, ..., v_m),

from which Eq. (7.8) follows.

Example 7.13. Let us work through the above proof when m = 3. Letting T = ℓ_1 ∧ ℓ_2 ∧ ℓ_3 we have

T(v_1, v_2, v_3) = \begin{vmatrix} ℓ_1(v_1) & ℓ_1(v_2) & ℓ_1(v_3)\\ ℓ_2(v_1) & ℓ_2(v_2) & ℓ_2(v_3)\\ ℓ_3(v_1) & ℓ_3(v_2) & ℓ_3(v_3)\end{vmatrix}
= ℓ_1(v_1)\begin{vmatrix} ℓ_2(v_2) & ℓ_2(v_3)\\ ℓ_3(v_2) & ℓ_3(v_3)\end{vmatrix} − ℓ_2(v_1)\begin{vmatrix} ℓ_1(v_2) & ℓ_1(v_3)\\ ℓ_3(v_2) & ℓ_3(v_3)\end{vmatrix} + ℓ_3(v_1)\begin{vmatrix} ℓ_1(v_2) & ℓ_1(v_3)\\ ℓ_2(v_2) & ℓ_2(v_3)\end{vmatrix}
= (ℓ_1(v_1) ℓ_2 ∧ ℓ_3 − ℓ_2(v_1) ℓ_1 ∧ ℓ_3 + ℓ_3(v_1) ℓ_1 ∧ ℓ_2)(v_2, v_3)

and so

i_{v_1}(ℓ_1 ∧ ℓ_2 ∧ ℓ_3) = ℓ_1(v_1) ℓ_2 ∧ ℓ_3 − ℓ_2(v_1) ℓ_1 ∧ ℓ_3 + ℓ_3(v_1) ℓ_1 ∧ ℓ_2.
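Here is a small numerical check of Eq. (7.8) (an addition to the notes): both sides are evaluated on random vectors in R^3, with covectors represented as row vectors as before.

```python
import numpy as np

rng = np.random.default_rng(5)

def wedge(ells, vs):
    """(ell_1 ^ ... ^ ell_m)(v_1, ..., v_m) = det[ell_i(v_j)]."""
    return np.linalg.det(np.array([[np.dot(l, v) for v in vs] for l in ells]))

ells = [rng.normal(size=3) for _ in range(3)]  # ell_1, ell_2, ell_3 in (R^3)^*
v, v2, v3 = (rng.normal(size=3) for _ in range(3))

# Left side: (i_v T)(v2, v3) = T(v, v2, v3).
lhs = wedge(ells, [v, v2, v3])

# Right side of Eq. (7.8): sum_j (-1)^{j-1} ell_j(v) (ell_1 ^ ... ^ [omit ell_j] ^ ...)(v2, v3).
rhs = sum((-1) ** j * np.dot(ells[j], v) *
          wedge(ells[:j] + ells[j + 1:], [v2, v3]) for j in range(3))
assert np.isclose(lhs, rhs)
```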

Exercise 7.1. Let {ε_j}_{j=1}^3 be the standard dual basis and v = (1, 2, 3)^tr ∈ R^3. Find a_1, a_2, a_3 ∈ R so that

i_v(ε_1 ∧ ε_2 ∧ ε_3) = a_1 ε_2 ∧ ε_3 + a_2 ε_1 ∧ ε_3 + a_3 ε_1 ∧ ε_2.

Corollary 7.14. For A ∈ Λ^p(V*), B ∈ Λ^q(V*), and v ∈ V, we have

i_v[A ∧ B] = (i_v A) ∧ B + (−1)^p A ∧ (i_v B).

Proof. It suffices to verify this identity on decomposable forms, A = ℓ_1 ∧ · · · ∧ ℓ_p and B = ℓ_{p+1} ∧ · · · ∧ ℓ_m, so that A ∧ B = ℓ_1 ∧ · · · ∧ ℓ_m and we have

i_v(A ∧ B) = Σ_{j=1}^m (−1)^{j−1} ℓ_j(v) · ℓ_1 ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m
= Σ_{j=1}^p (−1)^{j−1} ℓ_j(v) · ℓ_1 ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_p ∧ ℓ_{p+1} ∧ · · · ∧ ℓ_m
+ Σ_{j=p+1}^m (−1)^{j−1} ℓ_j(v) · ℓ_1 ∧ · · · ∧ ℓ_p ∧ ℓ_{p+1} ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m
=: T_1 + T_2,

where

T_1 = Σ_{j=1}^p (−1)^{j−1} ℓ_j(v) · ℓ_1 ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_p ∧ B = (i_v A) ∧ B

and

T_2 = A ∧ Σ_{j=p+1}^m (−1)^{j−1} ℓ_j(v) ℓ_{p+1} ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m
= (−1)^p A ∧ Σ_{j=p+1}^m (−1)^{j−(p+1)} ℓ_j(v) ℓ_{p+1} ∧ · · · ∧ ℓ̂_j ∧ · · · ∧ ℓ_m
= (−1)^p A ∧ (i_v B).

Lemma 7.15. If v, w ∈ V, then i_v^2 = 0 and i_v i_w = −i_w i_v.

Proof. Let T ∈ Λ^k(V*); then

i_v i_w T = T(w, v, —) = −T(v, w, —) = −i_w i_v T,

and taking w = v shows i_v^2 T = T(v, v, —) = 0.

Definition 7.16 (Cross product on R^3). For a, b ∈ R^3, let a × b be the unique vector in R^3 so that

det[c|a|b] = c · (a × b) for all c ∈ R^3.

Such a unique vector exists since we know that c → det[c|a|b] is a linear functional on R^3 for each a, b ∈ R^3.

Lemma 7.17 (Cross product). The cross product in Definition 7.16 agrees with the “usual definition,”

a × b = \begin{vmatrix} i & j & k\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{vmatrix} := i\begin{vmatrix} a_2 & a_3\\ b_2 & b_3\end{vmatrix} − j\begin{vmatrix} a_1 & a_3\\ b_1 & b_3\end{vmatrix} + k\begin{vmatrix} a_1 & a_2\\ b_1 & b_2\end{vmatrix},

where i = e_1, j = e_2, and k = e_3 is the standard basis for R^3.

Proof. Suppose that a × b is defined by the formula in the lemma; then for all c ∈ R^3,

(a × b) · c = c_1\begin{vmatrix} a_2 & a_3\\ b_2 & b_3\end{vmatrix} − c_2\begin{vmatrix} a_1 & a_3\\ b_1 & b_3\end{vmatrix} + c_3\begin{vmatrix} a_1 & a_2\\ b_1 & b_2\end{vmatrix}
= \begin{vmatrix} c_1 & c_2 & c_3\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\end{vmatrix} = det[c|a|b],

wherein we have used the cofactor expansion along the top row for the second equality and the fact that det A = det A^tr for the last equality.
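A quick numerical confirmation of Definition 7.16 and Lemma 7.17 (an added sketch, not from the notes), using numpy's built-in cross product:

```python
import numpy as np

rng = np.random.default_rng(6)
a, b, c = (rng.normal(size=3) for _ in range(3))

# det[c|a|b] = c . (a x b) for all c in R^3 (Definition 7.16).
lhs = np.linalg.det(np.column_stack([c, a, b]))
rhs = np.dot(c, np.cross(a, b))
assert np.isclose(lhs, rhs)
```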

Remark 7.18 (Generalized Cross product). If a_1, a_2, ..., a_{n−1} ∈ R^n, let a_1 × a_2 × · · · × a_{n−1} denote the unique vector in R^n such that

det[c|a_1|a_2| ... |a_{n−1}] = c · (a_1 × a_2 × · · · × a_{n−1})  ∀ c ∈ R^n.

This “multi-product” is the n > 3 analogue of the cross product in R^3. I don't anticipate using this generalized cross product.

7.3 Exercises

Exercise 7.2 (Cross I). For a ∈ R^3, let ℓ_a(v) = a · v = a^tr v, so that ℓ_a ∈ (R^3)^*. In particular ε_i = ℓ_{e_i} for i ∈ [3] gives the dual basis to the standard basis {e_i}_{i=1}^3. Show for a, b ∈ R^3,

ℓ_a ∧ ℓ_b = i_{a×b}[ε_1 ∧ ε_2 ∧ ε_3].   (7.9)

Hints: 1) write ℓ_a = Σ_{i=1}^3 a_i ε_i and 2) make use of Eq. (7.8).

Exercise 7.3 (Cross II). Use Exercise 7.2 to prove the standard vector calculus identity,

(a × b) · (x × y) = (a · x)(b · y) − (b · x)(a · y),

which is valid for all a, b, x, y ∈ R^3. Hint: evaluate Eq. (7.9) at (x, y) while using Lemma 7.17.

Exercise 7.4 (Surface Integrals). In this exercise, let ω ∈ A^3(R^3) be the standard volume form, ω(v_1, v_2, v_3) := det[v_1|v_2|v_3], suppose D is an open subset of R^2, and Σ : D → S ⊂ R^3 is a “parameterized surface,” refer to Figure 7.1. If F : R^3 → R^3 is a vector field on R^3, then from your vector calculus class,

∫∫_S F · N dA = ε · ∫∫_D F(Σ(u, v)) · [Σ_u(u, v) × Σ_v(u, v)] du dv   (7.10)

where ε = 1 (ε = −1) if N(Σ(u, v)) points in the same (opposite) direction as Σ_u(u, v) × Σ_v(u, v). We assume that ε is independent of (u, v) ∈ D.

Show the formula in Eq. (7.10) may be rewritten as

∫∫_S F · N dA = ε ∫∫_D (i_{F(Σ(u,v))} ω)(Σ_u(u, v), Σ_v(u, v)) du dv   (7.11)

where


Fig. 7.1. In this figure N is a smoothly varying normal to S, n is a normal tothe boundary of S, and T is a tangential vector to the boundary of S. Moreover,D 3 (u, v)→ Σ (u, v) ∈ S is a parametrization of S where D ⊂ R2.

ε := sgn(ω(N ∘ Σ, Σ_u, Σ_v)) = { 1 if ω(N ∘ Σ, Σ_u, Σ_v) > 0; −1 if ω(N ∘ Σ, Σ_u, Σ_v) < 0 }.

Remarks: Once we introduce the proper notation, we will be able to write Eq. (7.11) more succinctly as

∫∫_S F · N dA = ∫∫_S i_F ω := ε ∫∫_D Σ*(i_F ω).

Definition 7.19 (Curl). If F : R^3 → R^3 is a vector field on R^3, we define a new vector field called the curl of F by

∇ × F = (∂_2 F_3 − ∂_3 F_2) e_1 − (∂_1 F_3 − ∂_3 F_1) e_2 + (∂_1 F_2 − ∂_2 F_1) e_3   (7.12)

where {e_1, e_2, e_3} is the standard basis for R^3. This is usually remembered by the following mnemonic formulas;

∇ × F = det \begin{bmatrix} e_1 & e_2 & e_3\\ ∂_1 & ∂_2 & ∂_3\\ F_1 & F_2 & F_3\end{bmatrix}
= e_1 det\begin{bmatrix} ∂_2 & ∂_3\\ F_2 & F_3\end{bmatrix} − e_2 det\begin{bmatrix} ∂_1 & ∂_3\\ F_1 & F_3\end{bmatrix} + e_3 det\begin{bmatrix} ∂_1 & ∂_2\\ F_1 & F_2\end{bmatrix}.

Exercise 7.5 (Boundary Orientation). Referring to the set up in Exercise 7.4, the tangent vector T has been chosen by using the “right-hand” rule in order to determine the orientation on the boundary, ∂S, of S so that Stokes' theorem holds, i.e.

∫∫_S [∇ × F] · N dA = ∫_{∂S} F · T ds.   (7.13)

Show by using the “right hand rule” that T = c · N × n with c > 0 and then also show

c = ω(N, n, T) = (i_n i_N ω)(T).

Also note, by Exercise 7.4, that Eq. (7.13) may be written as

∫∫_S i_{∇×F} ω = ∫_{∂S} F · T ds.   (7.14)

Remark: We will introduce the “one form” F · dx and an “exterior derivative” operator, d, so that

d[F · dx] = i_{∇×F} ω

and Eq. (7.14) may be written in the pleasant form,

∫∫_S d[F · dx] = ∫_{∂S} F · dx.

7.4 *Proof of Theorem 7.1

[This section may safely be skipped if you are willing to believe the results as stated!]

If Theorem 7.1 is going to be true we must have M_{p,q}(A, B) = A · B = D where, as written in Eq. (7.4),

D = Σ_{i=1}^α Σ_{j=1}^β a_i b_j f^i_1 ∧ · · · ∧ f^i_p ∧ g^j_1 ∧ · · · ∧ g^j_q.   (7.15)

The problem with this presumed definition is that the formula for D in Eq. (7.15) seems to depend on the expansions of A and B in Eqs. (7.2) and (7.3) rather than only on A and B. [The expansions for A and B in Eqs. (7.2) and (7.3) are highly non-unique!] In order to see that D is independent of the possible choices of expansions of A and B, we are going to show in Proposition 7.23 below that D(v_1, ..., v_m) (with D as in Eq. (7.15)) may be expressed by a formula which only involves A and B and not their expansions. Before getting to this proposition we need some more notation and a preliminary lemma.

Notation 7.20 Let m = p + q be as in Theorem 7.1 and let {v_i}_{i=1}^m ⊂ V be fixed. For each J ⊂ [m] with #J = p write

J = {1 ≤ a_1 < a_2 < · · · < a_p ≤ m}, J^c = {1 ≤ b_1 < b_2 < · · · < b_q ≤ m},
v_J := (v_{a_1}, ..., v_{a_p}), and v_{J^c} := (v_{b_1}, ..., v_{b_q}).


Also for any α ∈ Σ_p and β ∈ Σ_q, let

σ_{J,α,β} = \begin{pmatrix} 1 & \dots & p & p+1 & \dots & m\\ a_{α1} & \dots & a_{αp} & b_{β1} & \dots & b_{βq}\end{pmatrix}.

When α and β are the identity permutations in Σ_p and Σ_q respectively, we will simply denote σ_{J,α,β} by σ_J, i.e.

σ_J = \begin{pmatrix} 1 & \dots & p & p+1 & \dots & m\\ a_1 & \dots & a_p & b_1 & \dots & b_q\end{pmatrix}.

The point of this notation is contained in the following lemma.

Lemma 7.21. Assuming Notation 7.20,

1. the map P_{p,m} × Σ_p × Σ_q ∋ (J, α, β) → σ_{J,α,β} ∈ Σ_m is a bijection, and
2. (−1)^{σ_{J,α,β}} = (−1)^{σ_J} (−1)^α (−1)^β.

Proof. We leave the proof of these assertions to the reader.

Lemma 7.22 (Wedge Product I). Let n = dim V, p, q ∈ [n], m := p + q, {f_i}_{i=1}^p ⊂ V*, {g_j}_{j=1}^q ⊂ V*, and {v_j}_{j=1}^m ⊂ V; then

(f_1 ∧ · · · ∧ f_p ∧ g_1 ∧ · · · ∧ g_q)(v_1, ..., v_m)
= Σ_{#J=p} (−1)^{σ_J} (f_1 ∧ · · · ∧ f_p)(v_J) (g_1 ∧ · · · ∧ g_q)(v_{J^c}).   (7.16)

Proof. In order to simplify notation in the proof, let ℓ_i = f_i for 1 ≤ i ≤ p and ℓ_{j+p} = g_j for 1 ≤ j ≤ q, so that

f_1 ∧ · · · ∧ f_p ∧ g_1 ∧ · · · ∧ g_q = ℓ_1 ∧ · · · ∧ ℓ_m.

Then by Definition 6.27 of ℓ_1 ∧ · · · ∧ ℓ_m along with Lemma 7.21, we find,

(ℓ_1 ∧ · · · ∧ ℓ_m)(v_1, ..., v_m) = det[{ℓ_i(v_j)}_{i,j=1}^m] = Σ_{σ∈Σ_m} (−1)^σ ∏_{i=1}^m ℓ_i(v_{σi})
= Σ_J Σ_{α∈Σ_p} Σ_{β∈Σ_q} (−1)^{σ_{J,α,β}} ∏_{i=1}^m ℓ_i(v_{σ_{J,α,β} i})
= Σ_J (−1)^{σ_J} Σ_{α∈Σ_p} Σ_{β∈Σ_q} (−1)^α ∏_{i=1}^p ℓ_i(v_{σ_{J,α,β} i}) (−1)^β ∏_{i=p+1}^m ℓ_i(v_{σ_{J,α,β} i}).

Combining this with the identity,

Σ_{α∈Σ_p} Σ_{β∈Σ_q} (−1)^α ∏_{i=1}^p ℓ_i(v_{σ_{J,α,β} i}) (−1)^β ∏_{i=p+1}^m ℓ_i(v_{σ_{J,α,β} i})
= Σ_{α∈Σ_p} (−1)^α ∏_{i=1}^p ℓ_i(v_{a_{αi}}) · Σ_{β∈Σ_q} (−1)^β ∏_{i=p+1}^m ℓ_i(v_{b_{β(i−p)}})
= (ℓ_1 ∧ · · · ∧ ℓ_p)(v_J) (ℓ_{p+1} ∧ · · · ∧ ℓ_m)(v_{J^c})
= (f_1 ∧ · · · ∧ f_p)(v_J) (g_1 ∧ · · · ∧ g_q)(v_{J^c})

completes the proof.

Proposition 7.23 (Wedge Product II). If A ∈ Λ^p(V*) and B ∈ Λ^q(V*) are written as in Eqs. (7.2–7.3) and D ∈ Λ^m(V*) is defined as in Eq. (7.15), then

D(v_1, ..., v_m) = Σ_{#J=p} (−1)^{σ_J} A(v_J) B(v_{J^c})  ∀ {v_j}_{j=1}^m ⊂ V.   (7.17)

This shows that defining A ∧ B by Eq. (7.4) is well defined and in fact A ∧ B could have been defined intrinsically using the formula,

A ∧ B(v_1, ..., v_m) = Σ_{#J=p} (−1)^{σ_J} A(v_J) B(v_{J^c}).   (7.18)

Proof. By Lemma 7.22,

f^i_1 ∧ · · · ∧ f^i_p ∧ g^j_1 ∧ · · · ∧ g^j_q (v_1, ..., v_m)
= Σ_{#J=p} (−1)^{σ_J} (f^i_1 ∧ · · · ∧ f^i_p)(v_J) · (g^j_1 ∧ · · · ∧ g^j_q)(v_{J^c})

and therefore,

D(v_1, ..., v_m) = Σ_{i=1}^α Σ_{j=1}^β a_i b_j (f^i_1 ∧ · · · ∧ f^i_p ∧ g^j_1 ∧ · · · ∧ g^j_q)(v_1, ..., v_m)
= Σ_{i=1}^α Σ_{j=1}^β a_i b_j Σ_{#J=p} (−1)^{σ_J} (f^i_1 ∧ · · · ∧ f^i_p)(v_J) · (g^j_1 ∧ · · · ∧ g^j_q)(v_{J^c})
= Σ_{#J=p} (−1)^{σ_J} [Σ_{i=1}^α a_i (f^i_1 ∧ · · · ∧ f^i_p)(v_J)] · [Σ_{j=1}^β b_j (g^j_1 ∧ · · · ∧ g^j_q)(v_{J^c})]
= Σ_{#J=p} (−1)^{σ_J} A(v_J) B(v_{J^c}),

which proves Eq. (7.17) and completes the proof of the proposition.

With all of this preparation we are now in a position to complete the proof of Theorem 7.1.

Proof of Theorem 7.1. As we have seen, we may define A ∧ B by either Eq. (7.4) or by Eq. (7.18). Equation (7.18) ensures A ∧ B is well defined and is multi-linear, while Eq. (7.4) ensures A ∧ B ∈ Λ^m(V*) and that Eq. (7.1) holds. This proves the existence assertion of the theorem. The uniqueness of M_{p,q}(A, B) = A ∧ B follows by the necessity of defining A ∧ B by Eq. (7.4).

Corollary 7.24. Suppose that {e_j}_{j=1}^n is a basis of V and {ε_j}_{j=1}^n is its dual basis of V*. Then for A ∈ Λ^p(V*) and B ∈ Λ^q(V*) we have

A ∧ B = (1/(p! · q!)) Σ_{j_1,...,j_m=1}^n A(e_{j_1}, ..., e_{j_p}) B(e_{j_{p+1}}, ..., e_{j_m}) ε_{j_1} ∧ · · · ∧ ε_{j_m}.   (7.19)

Proof. By Theorem 6.33 we may write,

A = (1/p!) Σ_{j_1,...,j_p=1}^n A(e_{j_1}, ..., e_{j_p}) ε_{j_1} ∧ · · · ∧ ε_{j_p} and
B = (1/q!) Σ_{j_{p+1},...,j_m=1}^n B(e_{j_{p+1}}, ..., e_{j_m}) ε_{j_{p+1}} ∧ · · · ∧ ε_{j_m},

and therefore Eq. (7.19) holds by computing A ∧ B as in Eq. (7.4).


Part III

Differential Forms on U ⊂o Rn


8 Derivatives, Tangent Spaces, and Differential Forms

In this chapter we will develop calculus and the language of differential forms on open subsets of Euclidean space in such a way that our results will transfer to the more general manifold setting.

8.1 Derivatives and Chain Rules

Notation 8.1 (Open subset) I use the symbol “⊂o” to denote containment with the smaller set being open in the bigger. Thus writing U ⊂o R^n means U is an open subset of R^n which we always assume to be non-empty.

Notation 8.2 For U ⊂o R^n, we write f : U → R^m as shorthand for saying that f is a function from U to R^m; thus for each x ∈ U,

f(x) = (f_1(x), f_2(x), ..., f_m(x))^tr

where f_i : U → R for each i ∈ [m].

Definition 8.3 (Directional Derivatives). Suppose U ⊂o R^n and f : U → R^m is a function. For p ∈ U and v ∈ R^n, let

(∂_v f)(p) := d/dt|_0 f(p + tv)

be the directional derivative^1 of f at p in the direction v. By definition, the jth partial derivative of f at p is

∂f/∂x_j(p) = (∂_{e_j} f)(p) = d/dt|_0 f(p + t e_j),

where {e_j}_{j=1}^n is the standard basis for R^n. We will also write ∂_j f for ∂f/∂x_j = ∂_{e_j} f.

1 We use this terminology even though no assumption about v being a unit vector is being made.

Definition 8.4. A function f : U → R is smooth if f has partial derivatives of all orders and all of these partial derivatives are continuous. We say f : U → R^m is smooth if each of the coordinate functions f_i : U → R is smooth.

Notation 8.5 We let C^∞(U, R^m) denote the smooth functions from U to R^m. When m = 1 we will also write C^∞(U, R) = Ω^0(U) and refer to these as the smooth 0-forms on U. We also let C^k(U, R^m) denote those f : U → R^m such that each coordinate function, f_i, has partial derivatives to order k and all of these partial derivatives are continuous.

Let us recall a version of the chain rule.

Theorem 8.6. If f ∈ C^1(U, R^m), p ∈ U, and v = (v_1, ..., v_n)^tr ∈ R^n, then

(∂_v f)(p) = Σ_{j=1}^n ∂f/∂x_j(p) v_j = f'(p) v

where f'(p) = Df(p) is the m×n matrix defined by

f'(p) = [∂f/∂x_1(p) | ∂f/∂x_2(p) | ... | ∂f/∂x_n(p)]
= \begin{bmatrix} ∂_1 f_1(p) & ∂_2 f_1(p) & \dots & ∂_n f_1(p)\\ ∂_1 f_2(p) & ∂_2 f_2(p) & \dots & ∂_n f_2(p)\\ \vdots & \vdots & & \vdots\\ ∂_1 f_m(p) & ∂_2 f_m(p) & \dots & ∂_n f_m(p)\end{bmatrix}.

We refer to f'(p) = Df(p) as the differential of f at p.

More generally, if σ : (−ε, ε) → U is a curve in U such that σ̇(0) = d/dt|_0 σ(t) ∈ R^n exists, then

d/dt|_0 f(σ(t)) = (∂_{σ̇(0)} f)(σ(0)) = f'(σ(0)) σ̇(0) = Σ_{j=1}^n ∂f/∂x_j(σ(0)) σ̇_j(0).   (8.1)


Example 8.7. If

f(x_1, x_2) = (x_1 x_2, sin(x_1), x_2 e^{x_1})^tr

then

f'(x_1, x_2) = \begin{bmatrix} x_2 & x_1\\ \cos(x_1) & 0\\ x_2 e^{x_1} & e^{x_1}\end{bmatrix}.

Example 8.8. Let

p = (0, 1)^tr, v = (π, 11)^tr, and f(x, y) = (y e^x, x^2 + y^2)^tr.

Then

f'(x, y) = \begin{bmatrix} y e^x & e^x\\ 2x & 2y\end{bmatrix}, f'(p) = \begin{bmatrix} 1 & 1\\ 0 & 2\end{bmatrix}, and (∂_v f)(p) = \begin{bmatrix} 1 & 1\\ 0 & 2\end{bmatrix}\begin{pmatrix} π\\ 11\end{pmatrix} = \begin{pmatrix} π + 11\\ 22\end{pmatrix}.
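The computation in Example 8.8 is easy to confirm numerically (an added sketch): compare the matrix–vector product f'(p)v with a finite-difference approximation of d/dt|_0 f(p + tv).

```python
import numpy as np

def f(x):
    return np.array([x[1] * np.exp(x[0]), x[0] ** 2 + x[1] ** 2])

def jacobian(x):
    return np.array([[x[1] * np.exp(x[0]), np.exp(x[0])],
                     [2 * x[0], 2 * x[1]]])

p = np.array([0.0, 1.0])
v = np.array([np.pi, 11.0])

exact = jacobian(p) @ v                            # (partial_v f)(p) = f'(p) v
h = 1e-6
approx = (f(p + h * v) - f(p - h * v)) / (2 * h)   # central difference in t
assert np.allclose(exact, approx, atol=1e-4)
print(exact)   # approximately [pi + 11, 22]
```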

Exercise 8.1. Let

f(r, θ) = (r cos θ, r sin θ)^tr for (r, θ) ∈ R^2.

Find f'(r, θ) and then show det[f'(r, θ)] = r.

Exercise 8.2. Let

f(r, θ, ϕ) = (r sin ϕ cos θ, r sin ϕ sin θ, r cos ϕ)^tr for (r, θ, ϕ) ∈ R^3.

Find f'(r, θ, ϕ) and then show det[f'(r, θ, ϕ)] = −r^2 sin ϕ.

The following rewriting of the chain rule is often useful for computing directional derivatives.

Lemma 8.9 (Chain Rule II). Let 0 ∈ U ⊂o R^n and f : U → R^m be a smooth function. Then

d/dt|_0 f(t, t, ..., t) = Σ_{j=1}^n d/dt|_0 f(t e_j) = Σ_{j=1}^n d/dt|_0 f(0, ..., 0, t, 0, ..., 0)   (t in the jth position).

Proof. Let σ(t) = (t, t, ..., t)^tr; then by the chain rule,

d/dt|_0 f(t, t, ..., t) = d/dt|_0 f(σ(t)) = f'(σ(0)) σ̇(0) = f'(0)[e_1 + · · · + e_n]
= Σ_{j=1}^n (∂_{e_j} f)(0) = Σ_{j=1}^n d/dt|_0 f(t e_j).

Exercise 8.3. Let

A = \begin{bmatrix} A_{11} & A_{12} & \dots & A_{1n}\\ A_{21} & A_{22} & \dots & A_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ A_{n1} & A_{n2} & \dots & A_{nn}\end{bmatrix} = [a_1| ... |a_n]

be an n×n matrix with ith column

a_i = (A_{1i}, A_{2i}, ..., A_{ni})^tr.

Given another n×n matrix B with analogous notation, show

(∂_B det)(A) = Σ_{j=1}^n det[a_1| ... |a_{j−1}|b_j|a_{j+1}| ... |a_n].   (8.2)

For example if n = 3, this formula reads,

(∂_B det)(A) = det[b_1|a_2|a_3] + det[a_1|b_2|a_3] + det[a_1|a_2|b_3].

Suggestions; by definition,

(∂_B det)(A) := d/dt|_0 det(A + tB) = d/dt|_0 det[a_1 + t b_1| ... |a_n + t b_n].

Now apply Lemma 8.9 with

f(x_1, ..., x_n) = det[a_1 + x_1 b_1| ... |a_n + x_n b_n].

Exercise 8.4 (Exercise 8.3 continued). Continuing the notation and results from Exercise 8.3, show;

1. If A = I is the n×n identity matrix in Eq. (8.2), then

(∂_B det)(I) = tr(B) = Σ_{j=1}^n B_{j,j}.

2. If A is an n×n invertible matrix, show

(∂_B det)(A) = det(A) · tr(A^{-1} B).

Hint: Verify the identity

det(A + tB) = det(A) · det(I + t A^{-1} B),

which you should then use along with the first item of this exercise.

Corollary 8.10. If A is an n×n matrix, then det(e^A) = e^{tr(A)}.

Proof. Let f(t) := det(e^{tA}); then

ḟ(t) = d/ds|_0 f(t + s) = d/ds|_0 det(e^{(t+s)A}) = d/ds|_0 det(e^{tA} e^{sA})
= det(e^{tA}) d/ds|_0 det(e^{sA}) = f(t) (∂_{d/ds|_0 e^{sA}} det)(e^{0A}) = f(t)(∂_A det)(I)
= f(t) tr(A), with f(0) = det(I) = 1.

Solving this differential equation then shows

det(e^{tA}) = e^{t·tr(A)}.
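Corollary 8.10 is easy to spot-check numerically (an added sketch using scipy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.random.default_rng(7).normal(size=(4, 4))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
```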

8.2 Tangent Spaces and More Chain Rules

Definition 8.11 (Tangent space). To each open set U ⊂o R^n, let

TU := U × R^n = {v_p = (p, v) : p ∈ U and v ∈ R^n}.

For a given p ∈ U, we let

T_p U = {v_p = (p, v) : v ∈ R^n}

and refer to this as the tangent space to U at p. Note that

TU = ∪_{p∈U} T_p U.

For v_p, w_p ∈ T_p U and λ ∈ R we define

v_p + λ w_p := (v + λw)_p,

which makes T_p U into a vector space isomorphic to R^n.

Notation 8.12 (Cotangent spaces) For p ∈ R^n, let T*_p U := [T_p U]^* be the dual space to T_p U.

Definition 8.13. If f ∈ C^∞(U, R^m) and v_p ∈ T_p U, let

df(v_p) := (∂_v f)(p) = f'(p) v.

We call df the differential of f and further write df_p for df|_{T_p U} ∈ [T_p U]^*.

We will mostly (probably exclusively) use the df notation in the case where m = 1.

Example 8.14. Let f(x_1, x_2) = x_1 x_2^2; then

f'(x_1, x_2) = [x_2^2  2x_1 x_2] =⇒ f'(p_1, p_2) = [p_2^2  2p_1 p_2].

Therefore,

df(v_p) = [p_2^2  2p_1 p_2] (v_1, v_2)^tr = p_2^2 v_1 + 2 p_1 p_2 v_2.

Notation 8.15 (Coordinate functions) Note well: from now on we will usually consider x = (x_1, ..., x_n)^tr to be the identity function from R^n to R^n rather than a point in R^n, i.e. if p = (p_1, p_2, ..., p_n)^tr then x_i(p) = p_i. We still however write

∂f/∂x_i(p) := ∂_i f(p) = (∂_{e_i} f)(p).


Example 8.16. We have for each i ∈ [n],

dx_i(v_p) = (∂_v x_i)(p) = d/dt|_0 x_i(p + tv) = d/dt|_0 (p_i + t v_i) = v_i.

Proposition 8.17. If f ∈ Ω^0(U), then

df = Σ_{i=1}^n (∂f/∂x_i) dx_i,

where the right side of this equation evaluated at v_p is by definition,

(Σ_{i=1}^n (∂f/∂x_i) dx_i)(v_p) = Σ_{i=1}^n ∂f/∂x_i(p) · dx_i(v_p).

Proof. By definition and the chain rule,

df(v_p) = (∂_v f)(p) = Σ_{i=1}^n ∂f/∂x_i(p) v_i = Σ_{i=1}^n ∂f/∂x_i(p) · dx_i(v_p).
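As a numerical illustration of Proposition 8.17 (an addition to the notes), the directional derivative (∂_v f)(p) is compared with Σ_i ∂f/∂x_i(p) v_i for a sample 0-form, with the partials approximated by finite differences.

```python
import numpy as np

def f(x):
    # A sample smooth 0-form on R^3, chosen here just for illustration.
    return x[0] * x[1] ** 2 + np.exp(x[2])

def partial(f, p, i, h=1e-6):
    ei = np.zeros_like(p); ei[i] = 1.0
    return (f(p + h * ei) - f(p - h * ei)) / (2 * h)

rng = np.random.default_rng(8)
p, v = rng.normal(size=3), rng.normal(size=3)

h = 1e-6
df_vp = (f(p + h * v) - f(p - h * v)) / (2 * h)        # df(v_p) = (partial_v f)(p)
rhs = sum(partial(f, p, i) * v[i] for i in range(3))    # sum_i (df/dx_i)(p) dx_i(v_p)
assert np.isclose(df_vp, rhs, atol=1e-4)
```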

Exercise 8.5. Using Proposition 8.17, find df when

f (x1, x2, x3) = x21 sin (ex2) + cos (x3) .

Lemma 8.18 (Product Rule). Suppose that f, g ∈ C^∞(U); then d(fg) = f dg + g df, which in more detail means

d(fg)(v_p) = f(p) dg(v_p) + g(p) df(v_p) for all v_p ∈ TU.

[You are asked to generalize this result in Exercise 8.6.]

Proof. This is the product rule. Here are two ways to prove this result.

1. The first method uses the product rule for directional derivatives,

d(fg)(v_p) = (∂_v(fg))(p) = (∂_v f · g + f ∂_v g)(p) = g(p) df(v_p) + f(p) dg(v_p).

2. For the second we use Proposition 8.17 and the product rule for partial derivatives to find,

d(fg) = Σ_{j=1}^n ∂_j(fg) dx_j = Σ_{j=1}^n [∂_j f · g + f ∂_j g] dx_j
= g Σ_{j=1}^n ∂_j f dx_j + f Σ_{j=1}^n ∂_j g dx_j = g df + f dg.

Example 8.19 (Example 8.14 revisited). Let f(x_1, x_2) = x_1 x_2^2 be as in Example 8.14; then using the product rule,

df = x_2^2 dx_1 + x_1 d[x_2^2] = x_2^2 dx_1 + 2 x_1 x_2 dx_2.

Proposition 8.20. If f ∈ Ω^0(U) and σ(t) is a curve in U such that σ̇(0) exists, then

d/dt|_0 f(σ(t)) = df(σ̇(0)_{σ(0)}).

Proof. By the chain rule and the definition of df(v_p),

d/dt|_0 f(σ(t)) = f'(σ(0)) σ̇(0) = (∂_{σ̇(0)} f)(σ(0)) = df(σ̇(0)_{σ(0)}).

Exercise 8.6. Let g_1, g_2, ..., g_n ∈ C^1(U, R), f ∈ C^1(R^n, R), and u = f(g_1, ..., g_n), i.e.

u(p) = f(g_1(p), ..., g_n(p)) for all p ∈ U.

Show

du = Σ_{j=1}^n (∂_j f)(g_1, ..., g_n) dg_j,

which is to be interpreted to mean,

du(v_p) = Σ_{j=1}^n (∂_j f)(g_1(p), ..., g_n(p)) dg_j(v_p) for all v_p ∈ TU.

Hint: For v_p ∈ TU, let σ(t) = (g_1(p + tv), ..., g_n(p + tv)) and then make use of the chain rule (see Eq. (8.1)) to compute du(v_p).

Here is yet one more version of the chain rule. [This next version essentially encompasses all of the previous versions.]

Exercise 8.7 (Chain Rule for Maps). Suppose that f : U → V and g : V → W are C^1-functions where U, V, and W are open subsets of R^n, R^m, and R^p respectively, and let g ∘ f : U → W be the composition map,

g ∘ f : U →^f V →^g W.

Show

(g ∘ f)'(p) = g'(f(p)) f'(p) for all p ∈ U.   (8.3)

Hint: Let v ∈ R^n and σ(t) := f(p + tv) – a differentiable curve in V. Then use the chain rule in Theorem 8.6 twice in order to compute,

(g ∘ f)'(p) v = d/dt|_0 g(f(p + tv)) = d/dt|_0 g(σ(t)).


We now want to define a derivative map which fully keeps track of the base points, unlike df which forgets the target base point.

Definition 8.21 (f_* : TU → TV). If U ⊂o R^n and V ⊂o R^m and f : U → V is a smooth function, i.e. f : U → R^m is smooth with f(U) ⊂ V, then we define a map f_* : TU → TV by

f_* v_p := [(∂_v f)(p)]_{f(p)} = [f'(p) v]_{f(p)} for all (p, v) ∈ TU = U × R^n.   (8.4)

We further let f_{*p} denote the restriction of f_* to T_p U, in which case f_{*p} : T_p U → T_{f(p)} V, which is seen to be linear by the formula in Eq. (8.4).

Fig. 8.1. Describing the differential in geometric context.

Proposition 8.22 (Chain rule again). Let f and g be as in Exercise 8.7. Here are our last two reformulations of the chain rule.

1. If σ(t) is a curve in U such that σ̇(0) = v and σ(0) = p, then

f_* v_p = f_*(σ̇(0)_{σ(0)}) = [d/dt|_0 f(σ(t))]_{f(σ(0))}.

2. The chain rule in Eq. (8.3) may be written in the following pleasing form,

(g ∘ f)_* = g_* f_*.

Proof. We take each item in turn.

1. Let v_p ∈ TU. By the chain rule,

d/dt|_0 f(σ(t)) = f'(σ(0)) σ̇(0) = f'(p) v

and therefore,

[d/dt|_0 f(σ(t))]_{f(σ(0))} = [f'(p) v]_{f(p)} =: f_* v_p.

Fig. 8.2. The chain rule in pictures.

2. By the chain rule in Exercise 8.7,

(g ∘ f)_* v_p = [(g ∘ f)'(p) v]_{(g∘f)(p)} = [g'(f(p)) f'(p) v]_{g(f(p))}.

On the other hand,

g_* f_* v_p = g_*([f'(p) v]_{f(p)}) = [g'(f(p)) f'(p) v]_{g(f(p))}

and hence (g ∘ f)_* v_p = g_* f_* v_p for all v_p ∈ TU, i.e. (g ∘ f)_* = g_* f_*.

8.3 Differential Forms

Standing notation: throughout this section, let {e_i}_{i=1}^n be the standard basis on R^n, {ε_i}_{i=1}^n be its dual basis, {x_i}_{i=1}^n be the standard coordinate functions on R^n (so x_i(v) = ε_i(v) = v_i for all v = (v_1, ..., v_n)^tr ∈ R^n), and U be an open subset of R^n.

Definition 8.23 (Differential k-form). A 0-form on U is just a function f : U → R, while (for k ∈ N) a differential k-form ω on U is an assignment

U ∋ p → ω_p ∈ Λ^k([T_p R^n]^*) for all p ∈ U.


The form ω is said to be C^r if for every fixed v_1, ..., v_k ∈ R^n the function

U ∋ p → ω_p([v_1]_p, ..., [v_k]_p) ∈ R

is a C^r function.

In order to simplify notation, I will usually just write

ω([v_1]_p, ..., [v_k]_p) for ω_p([v_1]_p, ..., [v_k]_p).

Notation 8.24 For an open subset U ⊂ R^n and k ∈ [n], we let Ω^k(U) denote the collection of C^∞ (smooth) k-forms on U.

Example 8.25. If {f_i}_{i=0}^k are smooth functions on U, then ω = f_0 df_1 ∧ · · · ∧ df_k, defined by

ω([v_1]_p, ..., [v_k]_p) = f_0(p) df_1 ∧ · · · ∧ df_k([v_1]_p, ..., [v_k]_p)
= f_0(p) det[{df_i([v_j]_p)}_{i,j=1}^k] = f_0(p) det[{(∂_{v_j} f_i)(p)}_{i,j=1}^k],

is a k-form on U. If f_1 = x_{l_1}, ..., f_k = x_{l_k} for some 1 ≤ l_1 < l_2 < · · · < l_k ≤ n, then

ω([v_1]_p, ..., [v_k]_p) = f_0(p) det[{ε_{l_i}(v_j)}_{i,j=1}^k] = f_0(p) ε_{l_1} ∧ · · · ∧ ε_{l_k}(v_1, ..., v_k).

Lemma 8.26. There is a one to one correspondence between k-forms ω on U and functions ω : U → Λ^k([R^n]^*). The correspondence is determined by;

ω(p)(v_1, ..., v_k) = ω_p([v_1]_p, ..., [v_k]_p) for all p ∈ U and {v_i}_{i=1}^k ⊂ R^n.

Under this correspondence, ω is a C^r k-form iff ω : U → Λ^k([R^n]^*) is a C^r-function.

Definition 8.27 (Multiplication Rules). If α ∈ Ω^k(U) and β ∈ Ω^l(U), we define α ∧ β ∈ Ω^{k+l}(U) by requiring

[α ∧ β]_p = α_p ∧ β_p ∈ Λ^{k+l}([T_p R^n]^*) for all p ∈ U.

If α = f ∈ Ω^0(U), then the above formula is to be interpreted as

[fβ]_p = f(p) β_p ∈ Λ^l([T_p R^n]^*) for all p ∈ U.

Remark 8.28. Using the identification in Lemma 8.26, these multiplication rules are equivalent to requiring

α ∧ β(p) = α(p) ∧ β(p) for all p ∈ U.

Notation 8.29 For

J = {1 ≤ j_1 < j_2 < · · · < j_k ≤ n} ⊂ [n],   (8.5)

let

dx_J := dx_{j_1} ∧ · · · ∧ dx_{j_k} ∈ Ω^k(R^n) and ε_J := ε_{j_1} ∧ · · · ∧ ε_{j_k} ∈ Λ^k([R^n]^*).

ω =∑

J⊂[n]:|J|=k

ωJdxJ , (8.6)

and all possible functions ωJ : U → R may occur. Moreover, if J ⊂ [n] as inEq. (8.5), then ωJ is related to ω by

ωJ (p) := ω(

[ej1 ]p , . . . , [ejk ]p

)= ω (p) (ej1 , . . . , ejk) for all p ∈ U.

Corollary 8.31. If $\omega$ is given as in Eq. (8.6), then
\[
\bar\omega(p) = \sum_{J\subset[n]:|J|=k}\omega_J(p)\,\varepsilon_J
\]
and $\omega$ is smooth iff the functions $\omega_J$ are smooth for each $J\subset[n]$ with $|J|=k.$

Example 8.32. If $\omega\in\Omega^2(U),$ then
\[
\omega = \sum_{1\le i<j\le n}\omega_{ij}\,dx_i\wedge dx_j \quad\text{and}\quad \bar\omega = \sum_{1\le i<j\le n}\omega_{ij}\,\varepsilon_i\wedge\varepsilon_j
\]
for some functions $\omega_{ij}\in\Omega^0(U).$

The following lemma is a direct consequence of our development of the multi-linear algebra in the previous part.

Lemma 8.33. If $\{\alpha_i\}_{i=1}^k\subset\Omega^1(U),$ then $\alpha_1\wedge\dots\wedge\alpha_k\in\Omega^k(U)$ and moreover, if $\{v_p^i\}_{i=1}^k\subset T_pU,$ then
\[
\alpha_1\wedge\dots\wedge\alpha_k\big(v_p^1,\dots,v_p^k\big) = \det\Big[\alpha_i\big(v_p^j\big)\Big]_{i,j=1}^k
\]
where
\[
\Big[\alpha_i\big(v_p^j\big)\Big]_{i,j=1}^k =
\begin{bmatrix}
\alpha_1(v_p^1) & \alpha_1(v_p^2) & \dots & \alpha_1(v_p^k)\\
\alpha_2(v_p^1) & \alpha_2(v_p^2) & \dots & \alpha_2(v_p^k)\\
\vdots & \vdots & \ddots & \vdots\\
\alpha_k(v_p^1) & \alpha_k(v_p^2) & \dots & \alpha_k(v_p^k)
\end{bmatrix}
\]
or its transpose if you prefer.
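The determinant formula in Lemma 8.33 is easy to test numerically. Below is a minimal sketch (my own illustration, not part of the notes) in which a constant-coefficient 1-form on $\mathbb{R}^n$ is identified with its coefficient vector $a,$ so that $\alpha(v)=a\cdot v;$ the helper name `wedge_of_one_forms` is hypothetical.

```python
import numpy as np

# Sketch of Lemma 8.33: identify a constant-coefficient 1-form alpha_i with its
# coefficient vector a_i (so alpha_i(v) = a_i . v); then
#   (alpha_1 ^ ... ^ alpha_k)(v_1, ..., v_k) = det[alpha_i(v_j)].
def wedge_of_one_forms(alphas, vectors):
    """alphas, vectors: length-k lists of 1D numpy arrays in R^n."""
    A = np.array([[a @ v for v in vectors] for a in alphas])
    return np.linalg.det(A)

# Example in R^3 with alpha_1 = dx_1 = eps_1 and alpha_2 = dx_2 = eps_2 (k = 2):
e1, e2, e3 = np.eye(3)
print(wedge_of_one_forms([e1, e2], [e1, e2]))   # 1.0
print(wedge_of_one_forms([e1, e2], [e2, e1]))   # -1.0, reflecting antisymmetry
```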

Remark 8.34 (Book Exercise 2.3.iii). Here is some help on Exercise 2.3.iii in the book, which asks you to show the following. Suppose $U$ is an open subset of $\mathbb{R}^n$ and $f_j\in C^\infty(U)$ for each $j\in[n].$ Let $F(p)=(f_1(p),\dots,f_n(p))^{\mathrm{tr}}$ and show
\[
df_1\wedge\dots\wedge df_n = \det F'\cdot dx_1\wedge\dots\wedge dx_n.
\]
Well, by Proposition 8.30 we know that
\[
df_1\wedge\dots\wedge df_n = \omega\cdot dx_1\wedge\dots\wedge dx_n
\]
where
\begin{align*}
\omega_p &= df_1\wedge\dots\wedge df_n\big([e_1]_p,\dots,[e_n]_p\big)\\
&= \det
\begin{bmatrix}
df_1\big([e_1]_p\big) & df_1\big([e_2]_p\big) & \dots & df_1\big([e_n]_p\big)\\
df_2\big([e_1]_p\big) & df_2\big([e_2]_p\big) & \dots & df_2\big([e_n]_p\big)\\
\vdots & \vdots & \ddots & \vdots\\
df_n\big([e_1]_p\big) & df_n\big([e_2]_p\big) & \dots & df_n\big([e_n]_p\big)
\end{bmatrix}
= \det\big[F'(p)\big].
\end{align*}
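For a concrete check of this remark, one can let sympy compute the Jacobian determinant directly. The following sketch is mine (the sample map $F$ is made up for illustration); it produces the coefficient of $df_1\wedge\dots\wedge df_n$ relative to $dx_1\wedge\dots\wedge dx_n,$ namely $\det F'(p).$

```python
import sympy as sp

# The coefficient of df_1 ^ ... ^ df_n relative to dx_1 ^ ... ^ dx_n is det F'(p),
# i.e. the determinant of the Jacobian matrix [d f_i / d x_j].
x1, x2, x3 = sp.symbols("x1 x2 x3")
F = sp.Matrix([x1 * x2, sp.sin(x3), x1 + x2**2])   # a sample F : R^3 -> R^3
jac = F.jacobian(sp.Matrix([x1, x2, x3]))          # the Jacobian matrix F'(p)
print(sp.simplify(jac.det()))                      # the coefficient function omega_p
```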

Example 8.35. If
\[
\alpha = f_0\,df_1\wedge\dots\wedge df_k\in\Omega^k(U)\quad\text{and}\quad \beta = g_0\,dg_1\wedge\dots\wedge dg_l\in\Omega^l(U)
\]
for some functions $\{f_j\}_{j=0}^k\cup\{g_i\}_{i=0}^l\subset\Omega^0(U),$ then
\[
\alpha\wedge\beta = f_0g_0\,df_1\wedge\dots\wedge df_k\wedge dg_1\wedge\dots\wedge dg_l\in\Omega^{k+l}(U).
\]

Exercise 8.8. Suppose that $\{x_j\}_{j=1}^4$ are the standard coordinates on $\mathbb{R}^4,$ $p=(1,-1,2,3)^{\mathrm{tr}}\in\mathbb{R}^4,$ $v_1=(1,2,3,4)^{\mathrm{tr}},$ $v_2=(0,1,-1,1)^{\mathrm{tr}},$ $v_3=(1,0,3,2)^{\mathrm{tr}},$
\[
\alpha = x_4(dx_1+dx_2),\quad \beta = x_1x_2(dx_3+dx_4),\quad\text{and}\quad \omega = \big(x_1^2+x_3^2\big)\,dx_3\wedge dx_2\wedge dx_4.
\]
Compute the following quantities;

1. $\alpha\big(v_p^1\big),$
2. $\alpha\wedge\alpha\big(v_p^1,v_p^2\big),$
3. $\alpha\wedge\beta\big(v_p^1,v_p^2\big),$
4. $\omega\big(v_p^1,v_p^2,v_p^3\big).$

Exercise 8.9. Let $\{x_i\}_{i=1}^6$ be the standard coordinates on $\mathbb{R}^6$ and let
\[
\omega = dx_1\wedge dx_2 + dx_3\wedge dx_4 + dx_5\wedge dx_6\in\Omega^2(\mathbb{R}^6).
\]
Show
\[
\omega\wedge\omega\wedge\omega = c\,dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dx_5\wedge dx_6
\]
for some $c\in\mathbb{R}$ which you should find.

8.4 Vector-Fields and Interior Products

Definition 8.36. A vector field on $U\subset_o\mathbb{R}^n$ is an assignment to each $p\in U$ of an element $F(p)\in T_pU.$ Necessarily, this means there exists a unique function, $f=(f_1,\dots,f_n)^{\mathrm{tr}}:U\to\mathbb{R}^n,$ such that $F(p)=[f(p)]_p$ for all $p\in U.$ We say $F$ is smooth if $f\in C^\infty(U,\mathbb{R}^n).$ To simplify notation, we will often simply identify $f$ with $F.$

Definition 8.37 (Interior Product). For $\omega\in\Omega^k(U)$ and $v_p\in T_pU,$ let
\[
i_{v_p}\omega_p := \omega_p(v_p,\dots)
\]
be the interior product of $v_p$ with $\omega_p\in\Lambda^k\big(T_p^*U\big)$ as in Definition 7.11. If $F$ is a vector field as in Definition 8.36, we let $i_F\omega\in\Omega^{k-1}(U)$ be defined by
\[
[i_F\omega]_p = i_{F(p)}\omega_p = i_{[f(p)]_p}\omega_p.
\]
[We will abuse notation and often just (improperly) write $i_f\omega$ for $i_F\omega.$]

Example 8.38. If $\omega = g_0\,dg_1\wedge\dots\wedge dg_k,$ then from Lemma 7.12,
\begin{align*}
i_F\omega &= g_0\sum_{j=1}^k(-1)^{j-1}\,dg_j(F)\,dg_1\wedge\dots\wedge\widehat{dg_j}\wedge\dots\wedge dg_k\\
&= g_0\sum_{j=1}^k(-1)^{j-1}\,(\partial_f g_j)\,dg_1\wedge\dots\wedge\widehat{dg_j}\wedge\dots\wedge dg_k,
\end{align*}
where a hat indicates a factor omitted from the wedge product. If $g_0=1$ and $g_j=x_j$ for $1\le j\le k,$ then $dx_j(F)=f_j$ and the above formula becomes
\[
i_F(dx_1\wedge\dots\wedge dx_k) = \sum_{j=1}^k(-1)^{j-1}f_j\,dx_1\wedge\dots\wedge\widehat{dx_j}\wedge\dots\wedge dx_k. \tag{8.7}
\]
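Formula (8.7) can be sanity-checked numerically against the definition $i_F\omega(v_2,\dots,v_k)=\omega(F,v_2,\dots,v_k).$ Here is a short sketch of my own (the helper `dx_wedge` is hypothetical) using the determinant formula of Lemma 8.33 with a constant vector field and constant test vectors.

```python
import numpy as np

def dx_wedge(indices, vectors):
    """Evaluate dx_{i_1} ^ ... ^ dx_{i_k} on k vectors via the determinant formula."""
    M = np.array([[v[i] for v in vectors] for i in indices])
    return np.linalg.det(M)

n, k = 5, 3
rng = np.random.default_rng(0)
F, v, w = rng.normal(size=(3, n))     # a constant vector field F and two test vectors

lhs = dx_wedge(range(k), [F, v, w])   # (dx_1 ^ dx_2 ^ dx_3)(F, v, w) = (i_F omega)(v, w)
# Right-hand side of Eq. (8.7): sum_j (-1)^{j-1} f_j dx_1 ^ ... ^ (omit dx_j) ^ ... ^ dx_k
rhs = sum((-1) ** j * F[j] * dx_wedge([i for i in range(k) if i != j], [v, w])
          for j in range(k))
print(np.isclose(lhs, rhs))           # True
```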


8.5 Pull Backs

Definition 8.39 (Pull-Back). Suppose that $V\subset_o\mathbb{R}^m$ and $U\subset_o\mathbb{R}^n$ and $\varphi:V\to U$ is a smooth function. Then for $\omega\in\Omega^p(U)$ we define $\varphi^*\omega\in\Omega^p(V)$ by
\[
(\varphi^*\omega)(v_1,\dots,v_p) = \omega(\varphi_*v_1,\dots,\varphi_*v_p).
\]

Lemma 8.40. If $f\in\Omega^0(U)$ and $\omega=df\in\Omega^1(U),$ then
\[
\varphi^*df = d[\varphi^*f] = d(f\circ\varphi). \tag{8.8}
\]

Proof. For $v_p\in T_pV,$ let $\sigma(t)=\varphi(p+tv)$ and use the chain rule to find
\[
\frac{d}{dt}\Big|_0 f(\varphi(p+tv)) = \frac{d}{dt}\Big|_0 f(\sigma(t)) = df\big([\dot\sigma(0)]_{\sigma(0)}\big) = df(\varphi_*v_p).
\]
Therefore,
\begin{align*}
(\varphi^*df)(v_p) = df(\varphi_*v_p) &= \frac{d}{dt}\Big|_0 f(\varphi(p+tv))\\
&= \frac{d}{dt}\Big|_0\big[(f\circ\varphi)(p+tv)\big] = d(f\circ\varphi)(v_p) = d[\varphi^*f](v_p).
\end{align*}

Proposition 8.41. If $\omega\in\Omega^k(U),$ $\eta\in\Omega^l(U),$ and $\varphi$ and $\psi$ are maps such that $\psi\circ\varphi$ makes sense, then
\[
\varphi^*\psi^*\omega = (\psi\circ\varphi)^*\omega \tag{8.9}
\]
and
\[
\varphi^*(\omega\wedge\eta) = \varphi^*\omega\wedge\varphi^*\eta. \tag{8.10}
\]

Proof. The first identity follows from Exercise 6.2 and the second from Theorem 7.10.

Corollary 8.42. Suppose that $V\subset_o\mathbb{R}^m,$ $U\subset_o\mathbb{R}^n,$ $\varphi:V\to U$ is a smooth function, and $g_j\in C^\infty(U)$ for $0\le j\le k.$ Then
\[
\varphi^*[g_0\,dg_1\wedge\dots\wedge dg_k] = g_0\circ\varphi\cdot d[g_1\circ\varphi]\wedge\dots\wedge d[g_k\circ\varphi]. \tag{8.11}
\]

Proof. Let $\alpha = g_0\,dg_1\wedge\dots\wedge dg_k.$

First proof. Using Eq. (8.10) it follows that
\[
\varphi^*\alpha = \varphi^*(g_0\,dg_1\wedge\dots\wedge dg_k) = \varphi^*g_0\,[\varphi^*dg_1\wedge\dots\wedge\varphi^*dg_k].
\]
This result along with Lemma 8.40 completes the proof of Eq. (8.11).

Second proof. Let $\{v^i\}_{i=1}^k\subset\mathbb{R}^m$ and $q\in V;$ then
\begin{align*}
(\varphi^*\alpha)\big(v_q^1,\dots,v_q^k\big) &= \alpha_{\varphi(q)}\big(\varphi_*v_q^1,\dots,\varphi_*v_q^k\big)\\
&= g_0(\varphi(q))\,dg_1\wedge\dots\wedge dg_k\big(\varphi_*v_q^1,\dots,\varphi_*v_q^k\big)\\
&= g_0(\varphi(q))\cdot\det\Big[dg_i\big(\varphi_*v_q^j\big)\Big]_{i,j=1}^k.
\end{align*}
But by Lemma 8.40 we have
\[
dg_i\big(\varphi_*v_q^j\big) = (\varphi^*dg_i)\big(v_q^j\big) = \big(d[g_i\circ\varphi]\big)\big(v_q^j\big),
\]
and so
\[
\det\Big[dg_i\big(\varphi_*v_q^j\big)\Big]_{i,j=1}^k = \big(d[g_1\circ\varphi]\wedge\dots\wedge d[g_k\circ\varphi]\big)\big(v_q^1,\dots,v_q^k\big),
\]
and hence
\[
(\varphi^*\alpha)\big(v_q^1,\dots,v_q^k\big) = (g_0\circ\varphi)(q)\cdot\big(d[g_1\circ\varphi]\wedge\dots\wedge d[g_k\circ\varphi]\big)\big(v_q^1,\dots,v_q^k\big),
\]
which again proves Eq. (8.11).

Example 8.43. Suppose that $f:\mathbb{R}^3\to\mathbb{R}^2$ is given by $f(x_1,x_2,x_3)=\big(x_1^2e^{x_2},\,x_1x_3\big),$ and that $\omega = x\,dy$ and $\alpha = \cos(xy)\,dx\wedge dy$ as forms on $\mathbb{R}^2,$ where $(x,y)$ are the standard coordinates on $\mathbb{R}^2.$ Here are the solutions;
\[
f^*\omega = x\circ f\cdot d[y\circ f] = x_1^2e^{x_2}\cdot d[x_1x_3] = x_1^2e^{x_2}\cdot(x_3\,dx_1+x_1\,dx_3)
\]
and
\begin{align*}
f^*\alpha &= \cos\big(x_1^2e^{x_2}\cdot x_1x_3\big)\,\big[d\big(x_1^2e^{x_2}\big)\big]\wedge[d(x_1x_3)]\\
&= \cos\big(x_1^3x_3e^{x_2}\big)\,e^{x_2}\big[2x_1\,dx_1 + x_1^2\,dx_2\big]\wedge(x_3\,dx_1+x_1\,dx_3)\\
&= \cos\big(x_1^3x_3e^{x_2}\big)\,e^{x_2}\big[2x_1^2\,dx_1\wedge dx_3 - x_1^2x_3\,dx_1\wedge dx_2 + x_1^3\,dx_2\wedge dx_3\big].
\end{align*}
Basically, in this case we need only let “$x=x_1^2e^{x_2}$” and “$y=x_1x_3$” and then follow our nose in computing $\omega=x\,dy$ and $\alpha=\cos(xy)\,dx\wedge dy.$
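Pull-back computations like the one above are easy to double-check with a computer algebra system. The following sketch (my own verification, not part of the notes) recomputes the coefficients of $f^*\alpha$ by using that, for $u=x\circ f$ and $v=y\circ f,$ the coefficient of $dx_i\wedge dx_j$ in $du\wedge dv$ is $u_{x_i}v_{x_j}-u_{x_j}v_{x_i}.$

```python
import sympy as sp

# f(x1, x2, x3) = (x1**2 * exp(x2), x1 * x3) and alpha = cos(x*y) dx ^ dy,
# so f*alpha = cos(u*v) du ^ dv with u = x o f and v = y o f.
x1, x2, x3 = sp.symbols("x1 x2 x3")
u = x1**2 * sp.exp(x2)                       # x o f
v = x1 * x3                                  # y o f
du = [sp.diff(u, t) for t in (x1, x2, x3)]
dv = [sp.diff(v, t) for t in (x1, x2, x3)]
c = sp.cos(u * v)
for i, j in [(0, 1), (0, 2), (1, 2)]:        # coefficients of dx1^dx2, dx1^dx3, dx2^dx3
    print((i + 1, j + 1), sp.simplify(c * (du[i] * dv[j] - du[j] * dv[i])))
```

The output reproduces the three coefficients $-x_1^2x_3e^{x_2}\cos(x_1^3x_3e^{x_2}),$ $2x_1^2e^{x_2}\cos(x_1^3x_3e^{x_2}),$ and $x_1^3e^{x_2}\cos(x_1^3x_3e^{x_2})$ found above.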

Example 8.44. Let $\omega = u\,dv$ where $u(x,y)=\sin(x+y)$ and $v(x,y)=e^{xy},$ and suppose again that $f(x_1,x_2,x_3)=\big(x_1^2e^{x_2},\,x_1x_3\big).$ Again the rule is to let $x=x_1^2e^{x_2}$ and $y=x_1x_3$ and then compute
\begin{align*}
f^*\omega &= \sin\big(x_1^2e^{x_2}+x_1x_3\big)\cdot d\exp\big(x_1^2e^{x_2}\cdot x_1x_3\big)\\
&= \sin\big(x_1^2e^{x_2}+x_1x_3\big)\cdot d\exp\big(x_1^3x_3e^{x_2}\big)\\
&= \sin\big(x_1^2e^{x_2}+x_1x_3\big)\cdot\exp\big(x_1^3x_3e^{x_2}\big)\,d\big(x_1^3x_3e^{x_2}\big)\\
&= \sin\big(x_1^2e^{x_2}+x_1x_3\big)\cdot\exp\big(x_1^3x_3e^{x_2}\big)\big(3x_1^2x_3e^{x_2}\,dx_1 + x_1^3x_3e^{x_2}\,dx_2 + x_1^3e^{x_2}\,dx_3\big)\\
&= \sin\big(x_1^2e^{x_2}+x_1x_3\big)\cdot\exp\big(x_1^3x_3e^{x_2}\big)\,x_1^2e^{x_2}\big(3x_3\,dx_1 + x_1x_3\,dx_2 + x_1\,dx_3\big).
\end{align*}


8.6 Exterior Differentiation

Now that we have defined forms it is natural to try to differentiate them. We have already differentiated $0$-forms, $f,$ to get a $1$-form, $df.$ So it is natural to generalize this definition as follows.

Definition 8.45 (Exterior Differentiation). If $\omega = \sum_J\omega_J\,dx_J\in\Omega^k(U),$ we define
\[
d\omega := \sum_J d\omega_J\wedge dx_J\in\Omega^{k+1}(U) \tag{8.12}
\]
or equivalently,
\[
d\omega = \sum_{i=1}^n\sum_J(\partial_i\omega_J)\,dx_i\wedge dx_J.
\]

It turns out that in order to compute $d\omega$ you only need to use the properties of $d$ explained in the next proposition. You may wish to skip the proof of this proposition until after seeing examples of computing $d\omega$ and doing the related exercises.

Proposition 8.46 (Properties of d). The exterior derivative $d$ satisfies the following properties;

1. $df(v_p) = (\partial_vf)(p)$ for $f\in\Omega^0(U).$
2. $d:\Omega^p(U)\to\Omega^{p+1}(U)$ is a linear map for all $0\le p<n.$
3. $d$ satisfies the product rule,
\[
d[\omega\wedge\eta] = d\omega\wedge\eta + (-1)^p\,\omega\wedge d\eta
\]
for all $\omega\in\Omega^p(U)$ and $\eta\in\Omega^q(U).$
4. $d^2\omega = 0$ for all $\omega\in\Omega^p(U).$

Suggestion: rather than reading the proof on your first pass, instead jump to Lemma 8.47 and continue reading from there. Come back to the proof after you have some experience with computing with $d.$

Proof. In terms of the identification of $\omega\in\Omega^p(U)$ with $\bar\omega\in C^\infty\big(U,\Lambda^p\big([\mathbb{R}^n]^*\big)\big)$ in Lemma 8.26, we have
\[
\overline{d\omega} = \sum_{i=1}^n\sum_J(\partial_i\omega_J)\,\varepsilon_i\wedge\varepsilon_J = \sum_{i=1}^n\varepsilon_i\wedge\sum_J(\partial_i\omega_J)\,\varepsilon_J,
\]
which may be written as
\[
\overline{d\omega} = \bar d\bar\omega := \sum_{i=1}^n\varepsilon_i\wedge\partial_i\bar\omega. \tag{8.13}
\]
This last equation describes $d\omega$ without first expanding $\omega$ as a linear combination of the $dx_J.$ This turns out to be quite convenient for deducing the basic properties of the exterior derivative stated in this proposition. To simplify notation, in this proof we will not distinguish between $\omega$ and $\bar\omega$ or between $d$ and $\bar d,$ and we will exclusively (in this proof) view forms as functions from $U$ to $\Lambda^p\big([\mathbb{R}^n]^*\big).$

We now go to the proof proper. The first two items are immediate from the definition of $d$ and the linearity of the derivative operator. The third item is a consequence of the product rule for differentiation;
\begin{align*}
d[\omega\wedge\eta] &= \sum_{j=1}^n\varepsilon_j\wedge\frac{\partial}{\partial x_j}[\omega\wedge\eta] = \sum_{j=1}^n\varepsilon_j\wedge\left[\frac{\partial\omega}{\partial x_j}\wedge\eta + \omega\wedge\frac{\partial\eta}{\partial x_j}\right]\\
&= \sum_{j=1}^n\varepsilon_j\wedge\frac{\partial\omega}{\partial x_j}\wedge\eta + \sum_{j=1}^n\varepsilon_j\wedge\omega\wedge\frac{\partial\eta}{\partial x_j}\\
&= d\omega\wedge\eta + (-1)^p\,\omega\wedge\sum_{j=1}^n\varepsilon_j\wedge\frac{\partial\eta}{\partial x_j} = d\omega\wedge\eta + (-1)^p\,\omega\wedge d\eta.
\end{align*}
Lastly,
\begin{align*}
d^2\omega &= \sum_{i=1}^n\varepsilon_i\wedge\frac{\partial}{\partial x_i}\left[\sum_{j=1}^n\varepsilon_j\wedge\frac{\partial\omega}{\partial x_j}\right] = \sum_{i,j=1}^n\varepsilon_i\wedge\varepsilon_j\wedge\frac{\partial^2\omega}{\partial x_i\partial x_j}\\
&= \frac12\sum_{i,j=1}^n\left[\varepsilon_i\wedge\varepsilon_j\wedge\frac{\partial^2\omega}{\partial x_i\partial x_j} + \varepsilon_j\wedge\varepsilon_i\wedge\frac{\partial^2\omega}{\partial x_j\partial x_i}\right]\\
&= \frac12\sum_{i,j=1}^n\left[\varepsilon_i\wedge\varepsilon_j\wedge\frac{\partial^2\omega}{\partial x_i\partial x_j} - \varepsilon_i\wedge\varepsilon_j\wedge\frac{\partial^2\omega}{\partial x_i\partial x_j}\right] = 0,
\end{align*}
wherein we have used the fact that mixed partial derivatives of $C^2$-functions (vector-valued or not) are equal.

Lemma 8.47. If $\{g_j\}_{j=0}^p\subset\Omega^0(U),$ then
\[
d[g_0\,dg_1\wedge\dots\wedge dg_p] = dg_0\wedge dg_1\wedge\dots\wedge dg_p. \tag{8.14}
\]
This formula, along with knowing $df$ for $f\in\Omega^0(U),$ completely determines $d$ on $\Omega^p(U).$


Proof. The proof is by induction on $p.$ Rather than doing the general induction argument, let me explain the case $p=3$ in detail, so that $\omega = g_0\,dg_1\wedge dg_2\wedge dg_3.$ Then, using only the properties developed in Proposition 8.46,
\[
d\omega = dg_0\wedge[dg_1\wedge dg_2\wedge dg_3] + g_0\,d[dg_1\wedge dg_2\wedge dg_3]
\]
where
\begin{align*}
d[dg_1\wedge dg_2\wedge dg_3] &= d[dg_1\wedge(dg_2\wedge dg_3)]\\
&= d^2g_1\wedge(dg_2\wedge dg_3) - dg_1\wedge d(dg_2\wedge dg_3)\\
&= 0 - dg_1\wedge\big[d^2g_2\wedge dg_3 - dg_2\wedge d^2g_3\big] = 0.
\end{align*}
Thus we have shown
\[
d\omega = dg_0\wedge dg_1\wedge dg_2\wedge dg_3
\]
as desired.

The next corollary shows that the properties in Proposition 8.46 actually uniquely determine the exterior derivative, $d.$

Corollary 8.48. If $d:\Omega^*(U)\to\Omega^{*+1}(U)$ is any linear operator satisfying the four properties in Proposition 8.46, then $d$ is in fact given as in Definition 8.45.

Proof. Let $\omega = \sum_J\omega_J\,dx_J\in\Omega^k(U),$ where the sum is over $J\subset[n]$ with $|J|=k.$ By Lemma 8.47, which was proved using only the properties in Proposition 8.46, we know that
\[
d[\omega_J\,dx_J] = d[\omega_J\,dx_{j_1}\wedge\dots\wedge dx_{j_k}] = d\omega_J\wedge dx_{j_1}\wedge\dots\wedge dx_{j_k} = d\omega_J\wedge dx_J.
\]
Thus, using the assumed linearity of $d,$ it follows that
\[
d\omega = \sum_J d\omega_J\wedge dx_J
\]
in agreement with the definition in Eq. (8.12).

Example 8.49. In this example, let $x,y,z$ be the standard coordinates on $\mathbb{R}^3$ (actually any smooth functions on $\mathbb{R}^3,$ or on $\mathbb{R}^k$ for that matter, would work). If
\[
\alpha = x\,dy - y\,dx + z\,dz,
\]
then
\[
d\alpha = dx\wedge dy - dy\wedge dx + dz\wedge dz = 2\,dx\wedge dy.
\]
If $\beta = e^{x+y^2+z^3}\,dx\wedge dy,$ then
\[
de^{x+y^2+z^3} = e^{x+y^2+z^3}\cdot d\big(x+y^2+z^3\big) = e^{x+y^2+z^3}\cdot\big(dx + 2y\,dy + 3z^2\,dz\big)
\]
and therefore,
\begin{align*}
d\beta &= de^{x+y^2+z^3}\wedge dx\wedge dy = e^{x+y^2+z^3}\cdot\big(dx + 2y\,dy + 3z^2\,dz\big)\wedge dx\wedge dy\\
&= e^{x+y^2+z^3}\cdot 3z^2\,dz\wedge dx\wedge dy = 3z^2e^{x+y^2+z^3}\,dx\wedge dy\wedge dz.
\end{align*}
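As a quick cross-check of the last computation (my own sketch, not part of the notes): for $\beta=g\,dx\wedge dy$ on $\mathbb{R}^3$ the only surviving term of $d\beta = dg\wedge dx\wedge dy$ is $(\partial g/\partial z)\,dx\wedge dy\wedge dz,$ so it suffices to differentiate the coefficient.

```python
import sympy as sp

# For beta = g dx ^ dy on R^3, d(beta) = (dg) ^ dx ^ dy = (dg/dz) dx ^ dy ^ dz,
# since the dx and dy pieces of dg are killed by dx ^ dy.
x, y, z = sp.symbols("x y z")
g = sp.exp(x + y**2 + z**3)
print(sp.diff(g, z))   # 3*z**2*exp(x + y**2 + z**3), the coefficient of dx ^ dy ^ dz
```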

Definition 8.50. A form $\omega\in\Omega^k(U)$ is closed if $d\omega = 0$ and it is exact if $\omega = d\mu$ for some $\mu\in\Omega^{k-1}(U).$

Note that if $\omega = d\mu,$ then $d\omega = d^2\mu = 0,$ so exact forms are closed, but the converse is not always true.

Example 8.51. In this example, again let $x,y,z$ be the standard coordinates on $\mathbb{R}^3.$ If
\[
\alpha = y\,dx + (z\cos yz + x)\,dy + y\cos yz\,dz,
\]
then
\[
d\alpha = dy\wedge dx + \big((\cos yz - yz\sin yz)\,dz + dx\big)\wedge dy + (\cos yz - yz\sin yz)\,dy\wedge dz = 0,
\]
i.e. $\alpha$ is closed, see Definition 8.50.

Exercise 8.10. Let $\alpha = x\,dx - y\,dy,$ $\beta = z\,dx\wedge dy + x\,dy\wedge dz,$ and $\gamma = z\,dy$ on $\mathbb{R}^3.$ Calculate
\[
\alpha\wedge\beta,\quad \alpha\wedge\beta\wedge\gamma,\quad d\alpha,\quad d\beta,\quad d\gamma.
\]

Exercise 8.11. Let $(x,y)$ be the standard coordinates on $\mathbb{R}^2$ and define
\[
\alpha := \big(x^2+y^2\big)^{-1}\cdot(x\,dy - y\,dx)\in\Omega^1\big(\mathbb{R}^2\setminus\{0\}\big).
\]
Show $\alpha$ is closed. [We will eventually see that this form is not exact.]

Exercise 8.12 (Divergence Formula). Let $f=(f_1,f_2,\dots,f_n)$ and $\omega = dx_1\wedge\dots\wedge dx_n.$ By Example 8.38 with $k=n$ we have
\[
i_f\omega = i_F\omega = \sum_{j=1}^n(-1)^{j-1}f_j\,dx_1\wedge\dots\wedge\widehat{dx_j}\wedge\dots\wedge dx_n.
\]
Show
\[
d[i_F\omega] = (\nabla\cdot f)\,\omega\quad\text{where}\quad \nabla\cdot f = \sum_{i=1}^n\partial_if_i,
\]
i.e. $\nabla\cdot f$ is the divergence of $f$ from your vector calculus course.


Exercise 8.13 (Curl Formula). Let $f=(f_1,f_2,f_3)\in C^\infty\big(\mathbb{R}^3,\mathbb{R}^3\big),$
\[
\omega = dx_1\wedge dx_2\wedge dx_3,\quad\text{and}\quad \alpha = f\cdot(dx_1,dx_2,dx_3) := f_1\,dx_1 + f_2\,dx_2 + f_3\,dx_3.
\]
Show $d\alpha = i_{\nabla\times f}\,\omega,$ where $\nabla\times f$ is the usual vector calculus curl of $f;$ see Eq. (7.12) of Definition 7.19 with $F$ replaced by $f=(f_1,f_2,f_3).$

Theorem 8.52 (d commutes with $\varphi^*$). Suppose that $V\subset_o\mathbb{R}^m,$ $U\subset_o\mathbb{R}^n,$ and $\varphi:V\to U$ is a smooth function. Then $d$ commutes with the pull-back, $\varphi^*.$ In more detail, if $0\le p\le m$ and $\alpha\in\Omega^p(U),$ then $d(\varphi^*\alpha)=\varphi^*(d\alpha).$

Proof. We may assume that $\alpha = g_0\,dg_1\wedge\dots\wedge dg_p,$ in which case
\[
\varphi^*\alpha = \varphi^*(g_0\,dg_1\wedge\dots\wedge dg_p) = \varphi^*g_0\,[d(\varphi^*g_1)\wedge\dots\wedge d(\varphi^*g_p)]
\]
and so
\[
d\varphi^*\alpha = d(\varphi^*g_0)\wedge d(\varphi^*g_1)\wedge\dots\wedge d(\varphi^*g_p),
\]
while from Lemma 8.47,
\[
\varphi^*d\alpha = \varphi^*dg_0\wedge\varphi^*dg_1\wedge\dots\wedge\varphi^*dg_p = d(\varphi^*g_0)\wedge d(\varphi^*g_1)\wedge\dots\wedge d(\varphi^*g_p) = d[\varphi^*\alpha].
\]


9 An Introduction to Integration of Forms

One of the main points of differential $k$-forms is that they may be integrated over $k$-dimensional manifolds. Although we are not going to define the notion of a manifold at this time, please do have a look at Chapter 6, starting on page 75, of Reyer Sjamaar's notes, Manifolds and Differential Forms, for the notion of a manifold and its associated tangent spaces, along with lots of pictures! (Pictures are one thing in short supply in our book.)

9.1 Integration of Forms Over “Parameterized Surfaces”

Definition 9.1 (Basic integral). If $D\subset_o\mathbb{R}^k$ and $\alpha = f\,dx_1\wedge\dots\wedge dx_k\in\Omega^k(D),$ we define
\[
\int_D\alpha := \int_Df\,dm
\]
provided the latter integral makes sense, i.e. provided $\int_D|f|\,dm<\infty.$ Oftentimes we will guarantee this to be the case by assuming $f\in C_c^\infty(D).$

We now want to elaborate on this basic integral.

Definition 9.2. Let $D\subset_o\mathbb{R}^k$ and $U\subset_o\mathbb{R}^n.$ We say a smooth function, $\gamma:D\to U,$ is a parameterized $k$-surface in $U.$

Definition 9.3. If $\gamma:D\to U$ is a parameterized $k$-surface in $U$ and $\omega\in\Omega^k(U)$ is a $k$-form, then we define
\[
\int_\gamma\omega := \int_D\gamma^*\omega.
\]

Example 9.4 (Line Integrals). Suppose that $\omega = \sum_{j=1}^nf_j\,dx_j\in\Omega^1(U)$ and $\gamma=(\gamma_1,\dots,\gamma_n)^{\mathrm{tr}}:[a,b]\to U$ is a smooth curve. Then, letting $t$ be the standard coordinate on $\mathbb{R}$ (i.e. $t(a)=a$ for all $a\in\mathbb{R}$), we find
\[
\gamma^*\omega = \sum_{j=1}^nf_j\circ\gamma(t)\,d(x_j\circ\gamma(t)) = \sum_{j=1}^nf_j(\gamma(t))\,d(\gamma_j(t)) = \sum_{j=1}^nf_j(\gamma(t))\,\dot\gamma_j(t)\,dt
\]
and hence
\[
\int_\gamma\omega = \int_{[a,b]}\sum_{j=1}^nf_j(\gamma(t))\,\dot\gamma_j(t)\,dt = \int_a^bf(\gamma(t))\cdot\dot\gamma(t)\,dt,
\]
where $f=(f_1,\dots,f_n)^{\mathrm{tr}}$ is thought of as a vector field on $U.$
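Line integrals of 1-forms are therefore computed exactly as in vector calculus: parameterize, dot with $\dot\gamma,$ and integrate. Here is a tiny numerical sketch of my own (with a made-up example) integrating $\omega=-y\,dx+x\,dy$ over the unit circle, where the answer should be $2\pi.$

```python
import numpy as np

def line_integral(f, gamma, dgamma, a, b, N=10_000):
    """Approximate int_gamma omega = int_a^b f(gamma(t)) . gamma'(t) dt (trapezoid rule)."""
    t = np.linspace(a, b, N)
    integrand = np.einsum("ij,ij->i", f(*gamma(t)), dgamma(t))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

gamma = lambda t: (np.cos(t), np.sin(t))                    # the unit circle
dgamma = lambda t: np.stack([-np.sin(t), np.cos(t)], axis=-1)
f = lambda x, y: np.stack([-y, x], axis=-1)                 # coefficients (f_1, f_2) of omega
print(line_integral(f, gamma, dgamma, 0.0, 2 * np.pi))      # ~ 6.2832 = 2*pi
```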

Example 9.5 (Integrals over surfaces). Suppose that $D=(-1,1)^2\subset\mathbb{R}^2,$ $U=\mathbb{R}^3,$ and $\gamma(x,y)=\big(x,y,2-x^2-y^2\big)$ as in Figure 9.1, and let
\[
\gamma^*\omega = f\,dx\wedge dy.
\]
In this picture we have divided the base up into little squares and then found their images under $\gamma.$

Fig. 9.1. This is the plot of the graph of $\gamma(x,y)=\big(x,y,2-x^2-y^2\big)$ over $D.$

It is reasonable to assign the contribution to $\int_\gamma\omega$ from a little base square, $Q_j := p_j+\varepsilon[0,1]^2,$ to be approximately
\[
\omega\Big(\gamma_*\big([\varepsilon e_1]_{p_j}\big),\gamma_*\big([\varepsilon e_2]_{p_j}\big)\Big) = (\gamma^*\omega)\big([\varepsilon e_1]_{p_j},[\varepsilon e_2]_{p_j}\big) = f(p_j)\,\varepsilon^2 = f(p_j)\cdot\mathrm{Area}(Q_j)
\]


and therefore we should have
\[
\int_\gamma\omega \cong \sum_jf(p_j)\cdot\mathrm{Area}(Q_j) \to \int_Df\,dm\quad\text{as }\varepsilon\downarrow 0.
\]

Theorem 9.6 (Stokes' Theorem II). Suppose that $D=\mathbb{H}=\{(p_1,\dots,p_n)\in\mathbb{R}^n : p_1\le 0\}$ is the “left half space,” $U\subset_o\mathbb{R}^m,$ $\gamma:D\to U$ is a parameterized $n$-surface,$^1$ and $\mu\in\Omega^{n-1}(U).$ Then, assuming that $\gamma^*\mu$ is the restriction to $\mathbb{H}$ of a smooth compactly supported $(n-1)$-form on $\mathbb{R}^n,$ we have
\[
\int_\gamma d\mu = \int_{\partial\gamma}\mu
\]
where $\partial\gamma:\mathbb{R}^{n-1}\to U$ is defined by
\[
\partial\gamma(t_1,\dots,t_{n-1}) = \gamma(0,t_1,\dots,t_{n-1}).
\]

Fig. 9.2. Half space.

Proof. If, as in book Exercise 3.2viii (our first version of Stokes' theorem), we let $\iota:\mathbb{R}^{n-1}\to\mathbb{R}^n$ be the inclusion map,
\[
\iota(t_1,\dots,t_{n-1}) := (0,t_1,\dots,t_{n-1}),
\]
then $\partial\gamma := \gamma\circ\iota:\mathbb{R}^{n-1}\to U.$ Therefore, using the fact that pull-backs commute with $d,$ the definitions of integration we have given, along with your book Exercise 3.2viii, we find
\[
\int_\gamma d\mu := \int_{\mathbb{H}}\gamma^*d\mu = \int_{\mathbb{H}}d[\gamma^*\mu] = \int_{\mathbb{R}^{n-1}}\iota^*[\gamma^*\mu] = \int_{\mathbb{R}^{n-1}}(\gamma\circ\iota)^*\mu = \int_{\mathbb{R}^{n-1}}(\partial\gamma)^*\mu =: \int_{\partial\gamma}\mu.
\]

$^1$ We assume $\gamma$ extends to a smooth function into $U$ on an open neighborhood of $\mathbb{H}.$

9.2 The Goal (Degree Theorem):

At the end of the day we would really like to define an integral of the form $\int_{\gamma(D)}\omega,$ by which we mean we want the integral to depend only on the image of $\gamma$ and not on the particular choice of parametrization of this image. For example, if $f:D'\to D$ is a diffeomorphism, so that $\gamma(D)=\gamma\circ f(D'),$ we are going to want
\[
\int_D\gamma^*\omega = \int_\gamma\omega = \int_{\gamma\circ f}\omega = \int_{D'}(\gamma\circ f)^*\omega = \int_{D'}f^*(\gamma^*\omega).
\]
In other words, we would like to show that if $f:D'\to D$ is a diffeomorphism, then the following change of variables theorem holds:
\[
\int_D\alpha = \int_{D'}f^*\alpha\quad\text{for all }\alpha\in\Omega_c^k(D). \tag{9.1}
\]
This last assertion will actually only be true up to a sign ambiguity when $D$ is connected, and we will have to take care of this sign ambiguity later by introducing the notion of an orientation. Nevertheless, the next very important step in our development of integration of forms is to find out how to relate $\int_{D'}f^*\alpha$ to $\int_D\alpha.$ This will lead us to the deepest topic of this course, namely degree theory and the change of variables theorem.

Definition 9.7 (Compact support). If $V$ is an open subset of $\mathbb{R}^n,$ we let $\Omega_c^k(V)$ denote the “compactly supported” $k$-forms on $V.$ This means there is a closed and bounded subset, $K\subset V,$ such that $\omega_p=0$ if $p\notin K.$

Definition 9.8. Let $U$ and $V$ be connected open subsets of $\mathbb{R}^n$ and suppose that $f:U\to V$ is a diffeomorphism. Then $f$ is orientation preserving if $\det f'>0$ and orientation reversing if $\det f'<0.$ [Because $U$ is connected and $\det f'$ is never $0,$ it follows that $\det f'$ is either always positive or always negative.]

Here is the statement of the theorem$^2$ we are heading to prove.

Theorem 9.9. Let $U$ and $V$ be connected open subsets of $\mathbb{R}^n$ and let $f:U\to V$ be a smooth proper$^3$ map. Then there exists an integer, $\deg(f)\in\mathbb{Z},$ such that
\[
\int_Uf^*\omega = \deg(f)\cdot\int_V\omega\quad\text{for all }\omega\in\Omega_c^n(V).
\]

[$^2$ This theorem is probably the most important and deepest theorem of this course. $^3$ If $U$ and $V$ are bounded open sets, then $\varphi:U\to V$ is proper if $\varphi(p)$ is near the boundary of $V$ when $p$ is near the boundary of $U.$ The general definition of proper will be given later.]

The degree function has the following properties;


1. $\deg(f\circ g) = \deg(f)\cdot\deg(g),$
2. $\deg(f) = 1$ ($-1$) if $f$ is an orientation preserving (reversing) diffeomorphism,
3. $\deg(f) = 0$ if $f(U)\subsetneq V,$
4. if $q\in V$ is a regular value$^4$ of $f,$ then
\[
\deg(f) = \sum_{p\in f^{-1}(q)}\mathrm{sgn}\big(\det f'(p)\big).
\]

Example 9.10. Let $U=(a,b)$ and $V=(c,d)$ be bounded open intervals in $\mathbb{R}$ and let $f:U\to V$ be a smooth map. There are four possible ways for $f$ to be a proper map, see Corollary 10.49 below. In what follows we write $f(a+):=\lim_{x\downarrow a}f(x)$ and $f(b-):=\lim_{x\uparrow b}f(x).$

1. $f(a+) = f(b-) = c.$
2. $f(a+) = f(b-) = d.$
3. $f(a+) = c$ and $f(b-) = d.$
4. $f(a+) = d$ and $f(b-) = c.$

In the first two cases $\deg(f)=0,$ while $\deg(f)=1$ in case 3 and $\deg(f)=-1$ in case 4; see Figure 9.3 below.
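Property 4 gives a practical way to compute degrees. As a hedged sketch of my own (under the assumptions that $q$ is a regular value and that a fine grid suffices to locate every preimage), here is how one might count signed preimages for a one-dimensional map; the helper `degree_1d` is hypothetical.

```python
import numpy as np

def degree_1d(f, df, q, a, b, N=100_001):
    """Approximate deg(f) = sum over p in f^{-1}(q) of sgn(f'(p)) for f on (a, b)."""
    x = np.linspace(a, b, N)[1:-1]        # stay inside the open interval (a, b)
    y = f(x) - q
    crossings = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]   # approximate preimages of q
    return int(sum(np.sign(df(x[i])) for i in crossings))

f = lambda x: x * (x - 1.0) * (x - 2.0)   # f(0+) = 0 and f(3-) = 6, a "case 3" map
df = lambda x: 3 * x**2 - 6 * x + 2
print(degree_1d(f, df, q=0.1, a=0.0, b=3.0))   # 1: the three preimages have signs +, -, +
```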

Exercise 9.1. Let $f:\mathbb{R}^2\to\mathbb{R}^2$ be the smooth map
\[
f(x,y) = \big(x^2-y^2,\,2xy\big).
\]
Show $\|f(x,y)\|\to\infty$ as $\|(x,y)\|\to\infty,$ which turns out to be equivalent to the statement that $f$ is proper, see Example 10.43 below. Further, compute $\deg(f).$

We are going to spend a fair bit of time understanding the hypotheses of Theorem 9.9 and proving it. To understand the hypotheses we need to discuss some basic topological notions, which we do in the next chapter. We will also need to construct lots of smooth compactly supported functions on open subsets of $\mathbb{R}^n.$ The next section is the grist mill for constructing smooth compactly supported functions.

$^4$ A point $q\in V$ is a regular value of $\varphi$ if $\det\varphi'(p)\ne 0$ for all $p\in\varphi^{-1}(q).$

Fig. 9.3. Three examples of proper one-dimensional maps together with their degrees, which are $0,$ $-1,$ and $1$ respectively. Note that in the first case the map is not surjective, which always leads to a degree zero map. In the one-dimensional case the degree can only take the values $0,$ $1,$ and $-1.$


10 Elements of Point Set (Metric) Topology

Definition 10.1 (Euclidean metric). Let $X$ be a subset of $\mathbb{R}^n$ and for $p,q\in X,$ let
\[
d(p,q) := \|p-q\| = \sqrt{\sum_{j=1}^n(p_j-q_j)^2} \tag{10.1}
\]
be the Euclidean distance between $p$ and $q.$

Remark 10.2. The function $d$ has the following basic properties for all $p,q,r\in X;$

1. $d(p,q)\ge 0$ with $d(p,q)=0$ iff $p=q,$
2. $d(p,q) = d(q,p),$ and
3. $d(p,q)\le d(p,r)+d(r,q).$

Definition 10.3 (Metric Space). A metric space is a set $X$ equipped with a function, $d:X\times X\to[0,\infty),$ satisfying the three properties in Remark 10.2 above.

Example 10.4. There are many examples of metric spaces. For example, one might let $X$ be the points on the surface of the earth and for $p,q\in X$ let $d(p,q)$ be the geodesic distance between $p$ and $q.$ This distance function generalizes to general “Riemannian manifolds.”

Although we are likely to only use metric spaces of the form in Definition 10.1, I will give many of the basic definitions and properties for general metric spaces, as it requires no extra work. You are free to always assume that $X$ is a subset of $\mathbb{R}^n$ and that $d(p,q)$ is given by Eq. (10.1) if you prefer.

Definition 10.5 (Limits of sequences). A sequence $\{x_n\}_{n=1}^\infty$ in a metric space $(X,d)$ is said to be convergent if there exists a point $x\in X$ such that $\lim_{n\to\infty}d(x,x_n)=0.$ In this case we write $\lim_{n\to\infty}x_n = x$ or $x_n\to x$ as $n\to\infty.$

Exercise 10.1. Show that $x$ in Definition 10.5 is necessarily unique by showing $d(x,y)=0$ if $x_n\to x$ and $x_n\to y$ as $n\to\infty.$

10.1 Continuity

Definition 10.6 (Continuity). Let $(X,d)$ and $(Y,d_Y)$ be two metric spaces. A function $f:X\to Y$ is continuous at $x\in X$ if
\[
\lim_{n\to\infty}f(x_n) = f(x)\quad\text{for all }\{x_n\}_{n=1}^\infty\subset X\text{ with }\lim_{n\to\infty}x_n = x.
\]
We say $f$ is continuous on $X$ if it is continuous at all points in $X,$ which may be stated as
\[
\lim_{n\to\infty}f(x_n) = f\Big(\lim_{n\to\infty}x_n\Big)\quad\text{whenever }\lim_{n\to\infty}x_n\text{ exists in }X.
\]

Definition 10.7. A function $f:(X,d)\to(Y,d_Y)$ is said to be Lipschitz (continuous) if there exists $K<\infty$ such that
\[
d_Y(f(x),f(x'))\le K\,d(x,x')\quad\text{for all }x,x'\in X. \tag{10.2}
\]
[Note $f$ is continuous, since if $x_n\to x$ then
\[
d_Y(f(x_n),f(x))\le K\,d(x_n,x)\to 0\text{ as }n\to\infty.]
\]
If $f$ satisfies Eq. (10.2) we will say that $f$ is Lip-$K$ continuous.


Example 10.8. A function $f:\mathbb{R}\to\mathbb{R}$ which is everywhere differentiable is Lipschitz iff $K:=\sup_{t\in\mathbb{R}}|f'(t)|<\infty.$ Indeed, if
\[
|f(y)-f(x)|\le K|y-x|\quad\text{for all }x,y\in\mathbb{R},
\]
then
\[
|f'(x)| = \lim_{y\to x}\frac{|f(y)-f(x)|}{|y-x|}\le K\quad\text{for all }x\in\mathbb{R}.
\]
Conversely, if $K:=\sup_{t\in\mathbb{R}}|f'(t)|<\infty,$ then by the mean value theorem, for all $y>x$ there exists $c\in(x,y)$ such that
\[
\left|\frac{f(y)-f(x)}{y-x}\right| = |f'(c)|\le K.
\]

It turns out that every metric space with an infinite number of elements comes equipped with a large collection of Lipschitz functions.

Lemma 10.9 (Distance to a Set). For any non-empty subset $A\subset X,$ let
\[
d_A(x) := \inf\{d(x,a)\,|\,a\in A\};
\]
then
\[
|d_A(x)-d_A(y)|\le d(x,y)\quad\forall\,x,y\in X, \tag{10.3}
\]
i.e. $d_A:X\to[0,\infty)$ is Lip-$1$ continuous.

Proof. Let $a\in A$ and $x,y\in X;$ then
\[
d_A(x)\le d(x,a)\le d(x,y)+d(y,a).
\]
Taking the infimum over $a\in A$ in the above inequality shows that
\[
d_A(x)\le d(x,y)+d_A(y)\quad\forall\,x,y\in X.
\]
Therefore, $d_A(x)-d_A(y)\le d(x,y),$ and by interchanging $x$ and $y$ we also have that $d_A(y)-d_A(x)\le d(x,y),$ which implies Eq. (10.3).

Example 10.10. For fixed $x\in X,$ the function $f(y)=d(x,y)$ is Lip-$1$ continuous, as is seen by taking $A=\{x\}$ in Lemma 10.9.

Corollary 10.11 (Optional on first read). The function $d$ satisfies
\[
|d(x,y)-d(x',y')|\le d(y,y')+d(x,x').
\]
Therefore $d:X\times X\to[0,\infty)$ is continuous in the sense that $d(x,y)$ is close to $d(x',y')$ if $x$ is close to $x'$ and $y$ is close to $y'.$ In particular, if $x_n\to x$ and $y_n\to y,$ then
\[
\lim_{n\to\infty}d(x_n,y_n) = d(x,y) = d\Big(\lim_{n\to\infty}x_n,\lim_{n\to\infty}y_n\Big).
\]

Proof. First Proof. By Lemma 10.9 for single point sets and the triangle inequality for the absolute value of real numbers,
\begin{align*}
|d(x,y)-d(x',y')| &\le |d(x,y)-d(x,y')| + |d(x,y')-d(x',y')|\\
&\le d(y,y') + d(x,x').
\end{align*}
Second Proof. By the triangle inequality,
\[
d(x,y)\le d(x,x')+d(x',y)\le d(x,x')+d(x',y')+d(y',y),
\]
from which it follows that
\[
d(x,y)-d(x',y')\le d(x,x')+d(y',y).
\]
Interchanging $x$ with $x'$ and $y$ with $y'$ in this inequality shows
\[
d(x',y')-d(x,y)\le d(x,x')+d(y',y),
\]
and the result follows from the last two inequalities.

Example 10.12. Let $f:\mathbb{R}\to\mathbb{R}$ be the function defined by
\[
f(x) = \begin{cases} 1 & \text{if }x\in\mathbb{Q},\\ 0 & \text{if }x\notin\mathbb{Q}.\end{cases}
\]
The function $f$ is discontinuous at all points in $\mathbb{R}.$ For example, if $x_0\in\mathbb{Q}$ we may choose $x_n\in\mathbb{R}\setminus\mathbb{Q}$ such that $\lim_{n\to\infty}x_n = x_0,$ while
\[
\lim_{n\to\infty}f(x_n) = \lim_{n\to\infty}0 = 0\ne 1 = f(x_0).
\]
Similarly, if $x_0\in\mathbb{R}\setminus\mathbb{Q}$ we may choose $x_n\in\mathbb{Q}$ such that $\lim_{n\to\infty}x_n = x_0,$ while
\[
\lim_{n\to\infty}f(x_n) = \lim_{n\to\infty}1 = 1\ne 0 = f(x_0).
\]

Exercise 10.2. Consider $\mathbb{N}$ as a metric space with $d(m,n):=|m-n|$ and suppose that $(Y,d)$ is a metric space. Show that every function, $f:\mathbb{N}\to Y,$ is continuous.

We will assume the reader is familiar with the basic properties of complex numbers and in particular with the following theorem.

Theorem 10.13. If $\{w_n\}_{n=1}^\infty$ and $\{z_n\}_{n=1}^\infty$ are convergent sequences of complex numbers, then

1. $\lim_{n\to\infty}(w_n+z_n) = \lim_{n\to\infty}w_n + \lim_{n\to\infty}z_n,$
2. $\lim_{n\to\infty}(w_n\cdot z_n) = \lim_{n\to\infty}w_n\cdot\lim_{n\to\infty}z_n,$


3. if we further assume that $\lim_{n\to\infty}z_n\ne 0,$ then
\[
\lim_{n\to\infty}\left(\frac{w_n}{z_n}\right) = \frac{\lim_{n\to\infty}w_n}{\lim_{n\to\infty}z_n}.
\]

Exercise 10.3. Suppose that $(X,d)$ is a metric space and $f,g:X\to\mathbb{C}$ are two continuous functions on $X.$ Show;

1. $f+g$ is continuous,
2. $f\cdot g$ is continuous,
3. $f/g$ is continuous provided $g(x)\ne 0$ for all $x\in X.$

Example 10.14. The function $f:\mathbb{C}\setminus\{0\}\to\mathbb{C}$ defined by $f(z)=1/z$ is continuous: since $g(z)=z$ is Lip-$1$ continuous (as $|g(z)-g(w)|=|z-w|$), it follows that $f(z)=1/g(z)$ is continuous by part 3. of Exercise 10.3.

Exercise 10.4. (You only need to explain parts 1. and 5. of this problem, as we have done the other parts in class.) Show the following functions from $\mathbb{C}$ to $\mathbb{C}$ are continuous.

1. $f(z) = c$ for all $z\in\mathbb{C},$ where $c\in\mathbb{C}$ is a constant.
2. $f(z) = |z|.$ [Hint: $f(z) = d(z,0)$ where $d(z,w):=|z-w|$ is the Euclidean norm on $\mathbb{C}\cong\mathbb{R}^2.$]
3. $f(z) = z$ and $f(z) = \bar z.$
4. $f(z) = \operatorname{Re}z$ and $f(z) = \operatorname{Im}z.$
5. $f(z) = \sum_{m,n=0}^N a_{m,n}z^m\bar z^n$ where $a_{m,n}\in\mathbb{C}.$

Exercise 10.5. Suppose now that $(X,d),$ $(Y,d_Y),$ and $(Z,d_Z)$ are three metric spaces and $f:X\to Y$ and $g:Y\to Z.$ Let $x\in X$ and $y=f(x)\in Y;$ show $g\circ f:X\to Z$ is continuous at $x$ if $f$ is continuous at $x$ and $g$ is continuous at $y.$ Recall that $(g\circ f)(x):=g(f(x))$ for all $x\in X.$ In particular, this implies that if $f$ is continuous on $X$ and $g$ is continuous on $Y,$ then $g\circ f$ is continuous on $X.$

Example 10.15. If $f:X\to\mathbb{C}$ is a continuous function, then $|f|$ is continuous and
\[
F := \sum_{m,n=0}^N a_{mn}f^m\cdot\bar f^{\,n}
\]
is continuous.

Definition 10.16. A map $f:X\to Y$ between topological spaces is called a homeomorphism provided that $f$ is bijective, $f$ is continuous, and $f^{-1}:Y\to X$ is continuous. If there exists $f:X\to Y$ which is a homeomorphism, we say that $X$ and $Y$ are homeomorphic. (As topological spaces, $X$ and $Y$ are essentially the same.)

10.2 Closed and Open Sets

Definition 10.17 (Closed Sets). A set $F\subset X$ is closed iff every convergent sequence $\{x_n\}_{n=1}^\infty$ which is contained in $F$ has its limit back in $F.$ We will write $F$ @ $X$ to indicate that $F$ is a closed subset of $X.$

Definition 10.18 (Open Sets). A set $V\subset X$ is open iff $V^c$ is closed, and we write $V\subset_o X$ to indicate that $V$ is an open subset of $X.$

Notation 10.19 To simplify notation in future arguments we will write;

1. $z_n\in A$ i.o. (read $z_n\in A$ infinitely often) to mean $\#\{n: z_n\in A\}=\infty,$ and
2. $z_n\in A$ a.a. (read $z_n\in A$ almost always) to mean $\#\{n: z_n\notin A\}<\infty.$ [Equivalently, $z_n\in A$ a.a. iff there exists $N<\infty$ such that $z_n\in A$ for all $n\ge N.$]

Theorem 10.20. The closed subsets of $(X,d)$ have the following properties;

1. $X$ and $\emptyset$ are closed.
2. If $\{C_\alpha\}_{\alpha\in I}$ is a collection of closed subsets of $X,$ then $\cap_{\alpha\in I}C_\alpha$ is closed in $X.$
3. If $A$ and $B$ are closed sets, then $A\cup B$ is closed.

Proof. 3. Let $\{z_n\}_{n=1}^\infty\subset A\cup B$ be such that $\lim_{n\to\infty}z_n =: z$ exists. Then $z_n\in A$ i.o. or $z_n\in B$ i.o. For the sake of definiteness, say $z_n\in A$ i.o., in which case we may choose a subsequence, $w_k:=z_{n_k}\in A$ for all $k.$ Since $\lim_{k\to\infty}w_k = z$ and $A$ is closed, it follows that $z\in A$ and hence $z\in A\cup B.$ Thus we have shown $A\cup B$ is closed.

Exercise 10.6. Prove item 2. of Theorem 10.20: if $\{C_\alpha\}_{\alpha\in I}$ is a collection of closed subsets of $X,$ then $\cap_{\alpha\in I}C_\alpha$ is closed in $X.$

Example 10.21. Let $X=\mathbb{R}$ and $-\infty<a<b<\infty.$

1. The sets $[a,b],$ $[a,\infty),$ and $(-\infty,a]$ are all closed sets. For example, if $\{x_n\}_{n=1}^\infty\subset[a,b]$ and $x=\lim_{n\to\infty}x_n,$ then $a\le x_n\le b$ for all $n$ and therefore, by the “sandwich lemma,” $a\le x\le b.$
2. The sets $(a,b),$ $(a,\infty),$ and $(-\infty,a)$ are all open sets. For example, $(a,b)^c = (-\infty,a]\cup[b,\infty)$ is the union of closed sets and hence closed.
3. The sets $(a,b]$ and $[a,b)$ are neither closed nor open.

Exercise 10.7. Let $(X,d)$ be a metric space and $C:=\{x_1,\dots,x_n\}$ be a finite subset of $X.$ Show $C$ is closed and hence that $X\setminus C$ is open.

Corollary 10.22. Let $(X,d)$ be a metric space. Then the collection of open subsets, $\tau_d,$ of $X$ satisfies;

1. $X$ and $\emptyset$ are in $\tau_d.$
2. $\tau_d$ is closed under taking arbitrary unions, i.e. if $\{U_\alpha\}_{\alpha\in I}$ is a collection of open sets, then $\cup_{\alpha\in I}U_\alpha$ is open.
3. $\tau_d$ is closed under taking finite intersections, i.e. if $U$ and $V$ are open sets, then $U\cap V$ is open as well.

Proof. 1) Since $X^c=\emptyset$ and $\emptyset^c = X$ are both closed, it follows that $X$ and $\emptyset$ are open, i.e. in $\tau_d.$ 2) If $\{U_\alpha\}_{\alpha\in I}$ are open sets, then $\{U_\alpha^c\}_{\alpha\in I}$ are closed sets and therefore $\cap_{\alpha\in I}U_\alpha^c$ is closed, and so $[\cap_{\alpha\in I}U_\alpha^c]^c = \cup_{\alpha\in I}U_\alpha$ is open. 3) If $U$ and $V$ are open, then $U^c$ and $V^c$ are closed. Therefore, $U^c\cup V^c$ is closed and hence $[U^c\cup V^c]^c = U\cap V$ is open.

Theorem 10.23. Let $(X,d)$ and $(Y,d_Y)$ be two metric spaces and $f:X\to Y$ be a continuous function. We then have;

1. if $C\subset Y$ is closed, then $f^{-1}(C)$ is closed in $X,$ and
2. if $V\subset Y$ is open, then $f^{-1}(V)$ is open in $X.$

Proof. If $\{x_n\}_{n=1}^\infty\subset f^{-1}(C)$ and $x=\lim_{n\to\infty}x_n$ exists in $X,$ then $f(x_n)\in C$ for all $n.$ So, by the continuity of $f$ and $C$ being closed, it follows that
\[
f(x) = f\Big(\lim_{n\to\infty}x_n\Big) = \lim_{n\to\infty}f(x_n)\in C\implies x\in f^{-1}(C),
\]
which shows $f^{-1}(C)$ is closed. If $V$ is open in $Y,$ then $V^c$ is closed in $Y$ and therefore $f^{-1}(V)^c = f^{-1}(V^c)$ is closed, which implies $f^{-1}(V)$ is open.

Lemma 10.24. If $f:X\to\mathbb{R}$ is a continuous function and $k\in\mathbb{R},$ then the following sets are closed;
\[
A := \{x\in X: f(x)\le k\},\quad B := \{x\in X: f(x) = k\},\quad\text{and}\quad C := \{x\in X: f(x)\ge k\}.
\]

Proof. First note that $A = f^{-1}((-\infty,k]),$ $B = f^{-1}(\{k\}),$ and $C = f^{-1}([k,\infty)).$ Since $(-\infty,k],$ $\{k\},$ and $[k,\infty)$ are all closed subsets of $\mathbb{R},$ the result follows from Theorem 10.23.

Example 10.25. Using Exercise 10.4 along with Lemma 10.24 shows that the following subsets of $\mathbb{C}$ are closed;

1. $\{z\in\mathbb{C}: a\le\operatorname{Im}z\le b\}$ for all $a\le b$ in $\mathbb{R}.$
2. $\{z\in\mathbb{C}: a\le\operatorname{Re}z\le b\}$ for all $a\le b$ in $\mathbb{R}.$
3. $\{z\in\mathbb{C}: \operatorname{Im}z = 0\text{ and }a\le\operatorname{Re}z\le b\}$ for all $a\le b$ in $\mathbb{R}.$

Definition 10.26 (Open/Closed Balls). The open ball $B(x,\delta)\subset X$ centered at $x\in X$ with radius $\delta>0$ is the set
\[
B(x,\delta) := \{y\in X: d(x,y)<\delta\}. \tag{10.4}
\]
We will often also write $B(x,\delta)$ as $B_x(\delta).$ The closed ball centered at $x\in X$ with radius $\delta>0$ is the set
\[
C_x(\delta) = C(x,\delta) := \{y\in X: d(x,y)\le\delta\}. \tag{10.5}
\]

Fig. 10.1. Balls in $\mathbb{R}^2$ corresponding to the $1$-norm, $2$-norm, $5$-norm, and $\tfrac12$-“norm.”

Lemma 10.27. Closed balls are closed and open balls are open.

Proof. Let $\delta>0.$ As we have seen, $f(y):=d(x,y)$ is continuous and therefore
\[
C_x(\delta) := \{y\in X: d(x,y)\le\delta\} = \{y\in X: f(y)\le\delta\}
\]
is a closed set by Lemma 10.24. [Notice that $\{x\} = C_x(0)$ is a closed set for all $x\in X.$] Similarly,
\[
B_x(\delta)^c := X\setminus B_x(\delta) = \{y\in X: d(x,y)\ge\delta\} = \{y\in X: f(y)\ge\delta\}
\]
is closed and so $B_x(\delta)$ is open.


Proposition 10.28. Let $U$ be a subset of a metric space $(X,d).$ Then the following are equivalent;

1. $U$ is open,
2. for all $z\in U$ there exists $r>0$ such that $B_z(r)\subset U,$
3. $U$ can be written as a union of open balls.

Proof. $1\implies 2.$ We will show (not 2. $\implies$ not 1.). If there exists a $z\in U$ such that $B_z(r)\nsubseteq U$ for all $r>0,$ then for each $n\in\mathbb{N}$ there exists $z_n\in B_z(1/n)\setminus U.$ We then have $z_n\in U^c$ for all $n$ and $\lim_{n\to\infty}z_n = z\in U,$ i.e. $z\notin U^c.$ This shows that $U^c$ is not closed, i.e. $U$ is not open. [Alternatively: by Lemma A.29 below, if $z\in U$ then $r:=d_{U^c}(z)>0$ and therefore $B_z(r)\subset U.$]

$2\implies 3.$ Given 2., for all $z\in U$ there exists $r_z>0$ such that $B_z(r_z)\subset U.$ We then have $U = \cup_{z\in U}B_z(r_z),$ which shows that $U$ may be written as a union of open balls.

$3\implies 1.$ From Lemma 10.27 we know that open balls are open. Thus, if $U$ is written as a union of open balls, it must be open as well by Corollary 10.22.

Exercise 10.8. Give an example of a collection of closed subsets, $\{A_n\}_{n=1}^\infty,$ of $\mathbb{C}$ such that $\cup_{n=1}^\infty A_n$ is not closed.

Lemma 10.29 (Approximating open sets from the inside by closed sets). Let $U\subset X$ be an open set and
\[
F_\varepsilon := \{x\in X\,|\,d_{U^c}(x)\ge\varepsilon\}\subset X.
\]
Then $F_\varepsilon$ is closed for all $\varepsilon>0$ and $F_\varepsilon\uparrow U$ as $\varepsilon\downarrow 0.$

Proof. The set $F_\varepsilon$ is closed by Lemma 10.24 and the fact that $d_{U^c}$ is continuous. It is clear that $d_{U^c}(x) = 0$ for $x\in U^c,$ so that $F_\varepsilon\subset U$ for each $\varepsilon>0$ and hence $\cup_{\varepsilon>0}F_\varepsilon\subset U.$ Now suppose that $x\in U\subset_o X.$ By Proposition 10.28, there exists an $\varepsilon>0$ such that $B_x(\varepsilon)\subset U,$ i.e. $d(x,y)\ge\varepsilon$ for all $y\in U^c.$ Hence $x\in F_\varepsilon$ and we have shown that $U\subset\cup_{\varepsilon>0}F_\varepsilon.$ Finally, it is clear that $F_\varepsilon\subset F_{\varepsilon'}$ whenever $\varepsilon'\le\varepsilon.$

It turns out that metric spaces always have lots of continuous functions.

Lemma 10.30 (Urysohn's Lemma for Metric Spaces). Let $(X,d)$ be a metric space and suppose that $A$ and $B$ are two disjoint closed subsets of $X.$ Then
\[
f(x) = \frac{d_B(x)}{d_A(x)+d_B(x)}\quad\text{for }x\in X \tag{10.6}
\]
defines a continuous function, $f:X\to[0,1],$ such that $f(x)=1$ for $x\in A$ and $f(x)=0$ if $x\in B.$

Proof. By Lemma 10.9, $d_A$ and $d_B$ are continuous functions on $X.$ Since $A$ and $B$ are closed, $d_A(x)>0$ if $x\notin A$ and $d_B(x)>0$ if $x\notin B.$ Since $A\cap B=\emptyset,$ $d_A(x)+d_B(x)>0$ for all $x,$ and $(d_A+d_B)^{-1}$ is continuous as well. The remaining assertions about $f$ are all easy to verify.

Sometimes Urysohn's lemma will be used in the following form. Suppose $F\subset V\subset X$ with $F$ being closed and $V$ being open; then there exists $f\in C(X,[0,1])$ such that $f=1$ on $F$ while $f=0$ on $V^c.$ This of course follows from Lemma 10.30 by taking $A=F$ and $B=V^c.$
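The Urysohn function (10.6) is completely explicit, so it is easy to visualize. Below is a small numerical illustration (mine, not from the notes) on $X=\mathbb{R}$ with $A=[1,2]$ and $B=(-\infty,0],$ where the sets are represented by finite samples; the helper `dist_to_set` is hypothetical.

```python
import numpy as np

def dist_to_set(x, S):
    """Distance from each point of x to a finite sample S of a set."""
    return np.min(np.abs(x[:, None] - S[None, :]), axis=1)

x = np.linspace(-1.0, 3.0, 9)
A = np.linspace(1.0, 2.0, 200)     # sample of the closed set A = [1, 2]
B = np.linspace(-5.0, 0.0, 200)    # sample of the closed set B = (-inf, 0], truncated at -5
dA, dB = dist_to_set(x, A), dist_to_set(x, B)
f = dB / (dA + dB)                 # the Urysohn function of Eq. (10.6)
print(np.round(f, 3))              # equals 0 on B, 1 on A, and interpolates in between
```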

10.3 Compactness

Throughout this section, let $(X,d)$ be a metric space.

Definition 10.31 (Open covers). Let $A$ be a subset of a metric space, $(X,d).$ An open cover of $A$ is a collection, $\mathcal{U},$ of open subsets of $X$ such that $A\subset\cup_{V\in\mathcal{U}}V.$ We further say that $A$ has a finite subcover if there exists a finite subcollection, $\mathcal{U}_0\subset_f\mathcal{U},$ such that $\mathcal{U}_0$ is still a cover of $A.$

There are two (equivalent) notions of a compact subset, $K\subset X.$

Definition 10.32 (Open cover compactness). The subset $K\subset X$ is open cover compact$^1$ if every open cover of $K$ has a finite subcover. (We will write $K$ @@ $X$ to denote that $K\subset X$ and $K$ is compact.)

Definition 10.33 (Sequential compactness). A subset $K\subset X$ is (sequentially) compact if every sequence $\{z_n\}_{n=1}^\infty\subset K$ has a convergent subsequence, $\{w_k:=z_{n_k}\}_{k=1}^\infty,$ such that $\lim_{k\to\infty}w_k\in K.$

Fortunately, these two notions of compactness are the same, as the following theorem states.

Theorem 10.34. If $(X,d)$ is a metric space, then $X$ is sequentially compact iff it is open cover compact. In the future we will just refer to compact sets without any extra adjectives.

Proof. This theorem should be proved in Math 140. A proof may be found in Theorem A.13 of Appendix A.

Exercise 10.9. Suppose that $K$ and $F$ are compact subsets of a metric space, $(X,d).$ Show $K\cup F$ is compact in $X$ as well.

The following theorem is typically also proved in an undergraduate real analysis class.

$^1$ We will usually simply say $A$ is compact in this case.


Theorem 10.35 (Bolzano–Weierstrass / Heine–Borel theorem). A subset $K\subset\mathbb{R}^D$ is compact iff it is closed and bounded.

Proof. A proof is sketched at the start of Appendix A.

Example 10.36 (Warning!). It is not true that a closed and bounded subset of an arbitrary metric space $(X,d)$ is necessarily compact.

The results in the next few exercises explain the importance of compact sets. As these results are so important, I will supply solutions in the text.

Exercise 10.10. If $(X,d)$ is a metric space and $K\subset X$ is compact, show that any subset, $C\subset K,$ which is closed is compact as well.

Proof. Let $\{z_n\}_{n=1}^\infty\subset C;$ then $\{z_n\}_{n=1}^\infty\subset K$ and therefore it has a convergent subsequence, $w_k:=z_{n_k}.$ As $C$ is closed, $\lim_{k\to\infty}w_k\in C,$ and so every sequence $\{z_n\}_{n=1}^\infty\subset C$ has a subsequence converging to an element of $C,$ i.e. $C$ is compact.

Exercise 10.11. Let $(X,d)$ and $(Y,\rho)$ be metric spaces, $K\subset X$ be a compact set, and $f:K\to Y$ be a continuous function. Show $f(K)$ is compact in $Y.$ In particular, for $C\subset K$ closed, we have that $f(C)$ is closed and in fact compact in $Y.$

Proof. Let $\{w_n\}_{n=1}^\infty\subset f(K)$ be a given sequence. By definition of $f(K),$ this implies there exists $\{z_n\}_{n=1}^\infty\subset K$ such that $f(z_n)=w_n.$ Since $K$ is compact, there is a subsequence, $z_k':=z_{n_k},$ of $\{z_n\}_{n=1}^\infty$ such that $\lim_{k\to\infty}z_k' = z\in K.$ Then $w_k':=f(z_k')$ is a subsequence of $\{w_n\}_{n=1}^\infty$ such that $\lim_{k\to\infty}w_k' = \lim_{k\to\infty}f(z_k') = f(z)\in f(K).$ This shows that $f(K)$ is compact. Now if $C\subset K$ is closed, then it is compact by Exercise 10.10. It then follows that $f(C)$ is compact and hence closed.

Exercise 10.12. If $K\subset\mathbb{R}$ is compact, then $\sup(K)\in K$ and $\inf(K)\in K,$ i.e. $\sup(K)=\max(K)$ and $\inf(K)=\min(K).$

Proof. Choose $x_n\in K$ such that $x_n\uparrow\sup(K).$ Since $K$ is compact, there exists a subsequence, $y_k:=x_{n_k},$ such that $\lim_{k\to\infty}y_k$ exists in $K.$ But then $\sup(K) = \lim_{k\to\infty}y_k\in K.$ The proof that $\inf(K)=\min(K)$ is analogous.

Exercise 10.13 (Extreme value theorem). Let $K$ be a compact subset of $X$ and $f:K\to\mathbb{R}$ be a continuous function. Show $-\infty<\inf_{x\in K}f(x)\le\sup_{x\in K}f(x)<\infty$ and that there exist $a,b\in K$ such that $f(a)=\inf_{x\in K}f(x)$ and $f(b)=\sup_{x\in K}f(x).$ Hint: first argue that there exists $\{z_n\}_{n=1}^\infty\subset K$ such that $f(z_n)\uparrow\sup_{x\in K}f(x)$ as $n\to\infty.$

Proof. By Exercise 10.11, $f(K)$ is a compact subset of $\mathbb{R}$ and hence
\[
\inf_{x\in K}f(x) = \inf(f(K)) = \min(f(K))\quad\text{and}\quad \sup_{x\in K}f(x) = \sup(f(K)) = \max(f(K)).
\]
So if $M=\max(f(K)),$ then $M\in f(K),$ which means $M=f(b)$ for some $b\in K.$ Similarly, $\inf_{x\in K}f(x) = f(a)$ for some $a\in K.$

Exercise 10.14 (Compacts are closed and bounded). Let $K$ be a compact subset of an arbitrary metric space $(X,d);$ then $K$ is closed and bounded. [This proves the easy direction in Theorem 10.35.]

Proof. ($K$ is closed.) Let $\{x_n\}_{n=1}^\infty\subset K$ be such that $x_n\to x\in X.$ By compactness, there exists a subsequence, $\{x_{n_k}\}_{k=1}^\infty,$ which converges to a point $y\in K.$ But by uniqueness of limits we must have $x=y\in K,$ and so $K$ is closed.

($K$ is bounded.) Let $o\in X$ be fixed and then apply the extreme value theorem with $f(x)=d(x,o)=d_{\{o\}}(x)$ to find $R=\max(f(K))<\infty,$ i.e. $K\subset C_o(R),$ which shows $K$ is bounded.

10.3.1 *More optional results

You may jump to Section 10.4 and skip this subsection on first reading.

Exercise 10.15 (Uniform Continuity). Let $K$ be a compact subset of $X$ and $f:K\to\mathbb{C}$ be a continuous function. Show that $f$ is uniformly continuous, i.e. if $\varepsilon>0$ there exists $\delta>0$ such that $|f(z)-f(w)|<\varepsilon$ if $w,z\in K$ with $d(w,z)<\delta.$ Hint: prove the contrapositive.

Proof. If not, there would exist $\varepsilon>0$ and sequences $\{w_n\}$ and $\{z_n\}$ in $K$ such that $d(w_n,z_n)\to 0$ while $|f(z_n)-f(w_n)|\ge\varepsilon$ for all $n.$ Using sequential compactness of $K,$ we may assume, by passing to subsequences if necessary, that $w_n\to w\in K$ and $z_n\to z\in K.$ Since $d(w_n,z_n)\to 0$ we must have $z=w,$ and hence we arrive at the contradiction
\[
\varepsilon\le\lim_{n\to\infty}|f(z_n)-f(w_n)| = |f(z)-f(w)| = 0.
\]

This variant of Exercise 10.13 is left to the reader.

Exercise 10.16 (Extreme value theorem II). Suppose that $f:X\to\mathbb{R}$ is a continuous function on a metric space $(X,d).$ Further assume there exist a compact subset $K\subset X$ and $x_0\in K$ such that $f(x_0)\le f(x)$ for all $x\in X\setminus K.$ Show there exists $k\in K$ such that $\inf_{x\in X}f(x) = f(k).$


Theorem 10.37 (Fundamental Theorem of Algebra). Suppose that $p(z)=\sum_{k=0}^na_kz^k$ is a polynomial on $\mathbb{C}$ with $a_n\ne 0$ and $n>0.$ Then there exists $z_0\in\mathbb{C}$ such that $p(z_0)=0.$

Proof. Since
\[
\lim_{|z|\to\infty}\frac{|p(z)|}{|z|^n} = \lim_{|z|\to\infty}\left|\sum_{k=0}^{n-1}\frac{a_k}{z^{n-k}} + a_n\right| = |a_n|,
\]
if $R>0$ is sufficiently large, then
\[
|p(z)|\ge|a_n|\,|z|^n/2\ge|a_n|R^n/2\ge|a_0| = |p(0)|\quad\text{for }|z|>R.
\]
Applying Exercise 10.16 with $K=\{z\in\mathbb{C}:|z|\le R\}$ and $x_0=0,$ there exists $z_0\in K\subset\mathbb{C}$ such that $|p(z_0)|\le|p(z)|$ for all $z\in\mathbb{C}.$ It now follows from Lemma 10.38 below that $p(z_0)=0.$

Lemma 10.38. Suppose that $p(z)=\sum_{k=0}^na_kz^k$ is a polynomial on $\mathbb{C}$ with $a_n\ne 0$ and $n>0.$ If $|p(z)|$ has a minimum at $z_0\in\mathbb{C},$ then $|p(z_0)|=0.$

Proof. For the sake of contradiction, let us suppose that $p(z_0)\ne 0$ and set
\[
q(z) := p(z_0+z) = \sum_{k=0}^na_k(z+z_0)^k = \sum_{k=0}^n\alpha_kz^k.
\]
Then $0<|q(0)|=|\alpha_0|\le|q(z)|$ for all $z\in\mathbb{C}$ and $\alpha_n = a_n\ne 0.$ Let $l\ge 1$ be the first index such that $\alpha_l\ne 0,$ so that
\[
q(z) = \alpha_0 + \alpha_lz^l + \dots + \alpha_nz^n = \alpha_0\left[1 + \frac{\alpha_l}{\alpha_0}z^l + \dots + \frac{\alpha_n}{\alpha_0}z^n\right].
\]
Evaluating this at $z = re^{i\theta},$ with $\theta$ chosen so that $\frac{\alpha_l}{\alpha_0}e^{il\theta} = -\left|\frac{\alpha_l}{\alpha_0}\right| =: -\varepsilon,$ implies for $r>0$ small that
\begin{align*}
|\alpha_0|\le\big|q(re^{i\theta})\big| &= |\alpha_0|\,\big|1-\varepsilon r^l + c_{l+1}r^{l+1} + \dots + c_nr^n\big|\\
&\le|\alpha_0|\big[1-\varepsilon r^l + \big|c_{l+1}r^{l+1}+\dots+c_nr^n\big|\big]\\
&\le|\alpha_0|\big[1-\varepsilon r^l + Cr^{l+1}\big] = |\alpha_0|\big[1 - r^l(\varepsilon - Cr)\big],
\end{align*}
where the $c_k$ and $C$ are certain appropriate constants. Choosing $r\in(0,\varepsilon/C)$ then gives $|\alpha_0|<|\alpha_0|,$ which is absurd, and we have reached the desired contradiction.

Remark 10.39. The fundamental theorem of algebra does not hold for polynomials in $z$ and $\bar z,$ or in $x$ and $y.$ For example, consider the polynomial
\[
p(z,\bar z) = 1 + z\bar z = 1 + x^2 + y^2.
\]
The point is that Lemma 10.38 does not hold for these more general classes of polynomials.

10.4 Proper maps

Again let $(X,d)$ and $(Y,\rho)$ be metric spaces and $f:X\to Y$ be a continuous function.

Example 10.40. If $f:\mathbb{R}\to\mathbb{R}$ is the function $f(x)=0$ for all $x\in\mathbb{R},$ then $f$ is continuous, $K=\{0\}$ is a compact subset of $\mathbb{R},$ while $f^{-1}(K)=\mathbb{R}$ is not compact. This shows that, in general, inverse images of compact sets under continuous maps need not be compact. It is, however, always true that $f^{-1}(K)$ is closed in $X$ for all compact $K\subset Y.$

Definition 10.41 (Proper Maps). A continuous function, $f:X\to Y,$ is proper if $f^{-1}(K)$ is compact in $X$ for all compact subsets $K$ @@ $Y.$

Example 10.42. If $f:X\to Y$ is a homeomorphism, then $f$ is proper. Indeed, in this case $g=f^{-1}$ is a continuous map and hence $f^{-1}(\text{compact}) = g(\text{compact})$ is compact.

Example 10.43. Suppose that $f:\mathbb{R}^m\to\mathbb{R}^n$ is a continuous map such that $\lim_{\|x\|\to\infty}\|f(x)\|=\infty;$ then $f$ is proper. Indeed, if $K\subset\mathbb{R}^n$ is a compact set, then it is closed and bounded and $f^{-1}(K)$ is closed because $f$ is continuous. If $f^{-1}(K)$ were not bounded, there would exist $\{x_n\}\subset f^{-1}(K)$ such that $\lim_{n\to\infty}\|x_n\|=\infty,$ and then, by assumption, $\lim_{n\to\infty}\|f(x_n)\|=\infty.$ Since $f(x_n)\in K$ for all $n,$ this would imply $K$ is not bounded, which violates $K$ being compact.

Let us now suppose that $U$ and $V$ are open subsets of $\mathbb{R}^n$ and $f:U\to V$ is a continuous function. We are going to state and prove a necessary and sufficient criterion for $f$ to be proper. Before doing so, let us record a few more properties involving compact subsets of $V.$

Definition 10.44. Given a non-empty open subset, $V\subset\mathbb{R}^n,$ and $\varepsilon>0,$ let
\[
K_\varepsilon^V = \Big\{y\in V: d_{V^c}(y)\ge\varepsilon\text{ and }d(y,0)\le\frac1\varepsilon\Big\} \tag{10.7}
\]
and
\[
W_\varepsilon^V := \Big\{y\in V: d_{V^c}(y)>\varepsilon\text{ and }d(y,0)<\frac1\varepsilon\Big\}. \tag{10.8}
\]
See Figure 10.2.


Fig. 10.2. The shaded area indicates $K_\varepsilon^V,$ whereas $W_\varepsilon^V$ is the region $K_\varepsilon^V$ with its boundary removed.

Remark 10.45. Let $V$ be an open subset of $\mathbb{R}^n$ and let $d(x,y)=\|x-y\|$ for all $x,y\in\mathbb{R}^n.$

1. $K_\varepsilon^V$ is a closed bounded subset of $\mathbb{R}^n$ (hence compact), $W_\varepsilon^V$ is an open set such that $W_\varepsilon^V\subset K_\varepsilon^V\subset V,$ and $W_\varepsilon^V\uparrow V$ as $\varepsilon\downarrow 0.$
2. If $K$ is any compact subset of $V,$ then there exists an $\varepsilon>0$ such that $K\subset K_\varepsilon^V.$ Indeed, $\{W_\varepsilon^V\}_{\varepsilon>0}$ is an open cover of $K$ and hence $K\subset W_\varepsilon^V\subset K_\varepsilon^V$ for some $\varepsilon>0.$
3. If $K$ is a compact subset of $V,$ then
\[
d(K,V^c) = \inf_{x\in K}d_{V^c}(x) = \min_{x\in K}d_{V^c}(x)>0
\]
because $d_{V^c}(\cdot)$ is continuous, $d_{V^c}(x)>0$ for all $x\in K,$ and $K$ is compact.

Theorem 10.46. Suppose that $U$ and $V$ are open subsets of $\mathbb{R}^n$ and $f:U\to V$ is a continuous function. Then $f$ is proper iff for all $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that
\[
f\big(U\setminus K_{\delta(\varepsilon)}^U\big)\subset V\setminus K_\varepsilon^V.
\]

Proof. We are first going to prove the following claim.

Claim: $f$ is proper iff for every $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that $f^{-1}\big(K_\varepsilon^V\big)\subset K_\delta^U.$

Proof of claim. ($\implies$) If $f$ is proper, then $f^{-1}\big(K_\varepsilon^V\big)$ is a compact subset of $U,$ and so by item 2. of Remark 10.45 there is a $\delta=\delta(\varepsilon)>0$ such that $f^{-1}\big(K_\varepsilon^V\big)\subset K_\delta^U.$

($\impliedby$) We now suppose that for every $\varepsilon>0$ there exists a $\delta=\delta(\varepsilon)>0$ such that $f^{-1}\big(K_\varepsilon^V\big)\subset K_\delta^U,$ and let $K$ be a compact subset of $\mathbb{R}^n$ contained in $V.$ By item 2. of Remark 10.45, there exists $\varepsilon>0$ such that $K\subset K_\varepsilon^V,$ and then by the given assumption there exists $\delta=\delta(\varepsilon)>0$ so that
\[
f^{-1}(K)\subset f^{-1}\big(K_\varepsilon^V\big)\subset K_\delta^U\subset U.
\]
This shows that $f^{-1}(K)$ is bounded in $\mathbb{R}^n,$ and so $f^{-1}(K)$ will be compact provided we can show that $f^{-1}(K)$ is closed in $\mathbb{R}^n.$ To this end, suppose that $\{x_n\}_{n=1}^\infty\subset f^{-1}(K)$ and $\lim_{n\to\infty}x_n = x$ exists in $\mathbb{R}^n.$ As $K_\delta^U$ is closed, we know that $x\in K_\delta^U\subset U$ and therefore $f(x) = f(\lim_{n\to\infty}x_n) = \lim_{n\to\infty}f(x_n)\in K,$ i.e. $x\in f^{-1}(K).$ So $f^{-1}(K)$ is closed in $\mathbb{R}^n$ and the proof of the claim is complete.

The theorem now follows from the claim and the observation that
\begin{align*}
f^{-1}\big(K_\varepsilon^V\big)\subset K_\delta^U &\iff U\setminus K_\delta^U\subset U\setminus f^{-1}\big(K_\varepsilon^V\big) = f^{-1}\big(V\setminus K_\varepsilon^V\big)\\
&\iff f\big(U\setminus K_{\delta(\varepsilon)}^U\big)\subset V\setminus K_\varepsilon^V.
\end{align*}

The next theorem is a somewhat informal restatement of Theorem 10.46.

Theorem 10.47 (Informal version of Theorem 10.46). A continuous function, $f:U\to V,$ is proper iff whenever $x\in U$ is close to $U^c$ or $\|x\|$ is large, then $f(x)\in V$ is either close to $V^c$ or $\|f(x)\|$ is large.

Proof. Since
\[
U\setminus K_\delta^U = \{x\in U: d_{U^c}(x)<\delta\}\cup\Big\{x\in U: d(x,0)>\frac1\delta\Big\},
\]
we see that $x\in U\setminus K_\delta^U$ iff $x$ is $\delta$-close to the boundary of $U$ or $1/\delta$-far from $0.$ With this observation in hand, we see that the informal statement of this theorem is equivalent to the statement in Theorem 10.46.

Corollary 10.48. If $U$ and $V$ are bounded open subsets of $\mathbb{R}^n$ and $f:U\to V$ is a continuous map, then $f$ is proper iff $d_{V^c}(f(x))\to 0$ as $x\in U$ with $d_{U^c}(x)\to 0.$

Corollary 10.49. Let $-\infty\le a<b\le\infty,$ $-\infty\le c<d\le\infty,$ and $f:(a,b)\to(c,d)$ be a continuous map. Then $f$ is proper iff $f(b-):=\lim_{x\uparrow b}f(x)$ and $f(a+):=\lim_{x\downarrow a}f(x)$ both exist (in the extended sense, i.e. we also allow for $\pm\infty$ as possible limits) and $f(a+),f(b-)\in\{c,d\}.$

Proof. Suppose that $f$ is proper. If $\lim_{x\uparrow b}f(x)$ does not exist in the extended sense, then there exist $c<c_0<d_0<d$ and a sequence $\{x_n\}_{n=1}^\infty\subset(a,b)$ such that $x_n\uparrow b$ as $n\to\infty$ with $f(x_n)\le c_0$ for $n$ odd and $f(x_n)\ge d_0$ for $n$ even. By the intermediate value theorem it follows that there exists $\{y_n\}_{n=1}^\infty\subset f^{-1}(\{c_0\})$ such that $y_n\uparrow b.$ On the other hand, $f^{-1}(\{c_0\})$ is compact and hence $b=\lim_{n\to\infty}y_n\in f^{-1}(\{c_0\})\subset(a,b),$ which is impossible. Thus it follows that $f(b-)=\lim_{x\uparrow b}f(x)$ exists and, by a similar argument, $f(a+)=\lim_{x\downarrow a}f(x)$ exists as well. We leave it to the reader to use Theorem 10.46 (or Theorem 10.47) to show that both of these limits must be in $\{c,d\}.$

Conversely, suppose that $f(b-):=\lim_{x\uparrow b}f(x)$ and $f(a+):=\lim_{x\downarrow a}f(x)$ both exist and $f(a+),f(b-)\in\{c,d\}.$ From this it follows that $f(x)$ is near $\{c,d\}$ when $x\in(a,b)$ is near $\{a,b\},$ so $f$ is proper by Theorem 10.47. [Here, for example, we say $x$ is near $\{a,\infty\}$ iff either $x$ is near $a$ or $x$ is very large and positive.]

Example 10.50. There are four possible cases for the limits in Corollary 10.49.

1. $f(a+) = f(b-) = c.$
2. $f(a+) = f(b-) = d.$
3. $f(a+) = c$ and $f(b-) = d.$
4. $f(a+) = d$ and $f(b-) = c.$

In the first two cases $\deg(f)=0,$ while $\deg(f)=1$ in case 3 and $\deg(f)=-1$ in case 4; see Figure 10.3 below.

Exercise 10.17. Explain why;

1. $f:(0,\infty)\to(0,\infty)$ defined by $f(x)=1/x$ is a proper map, and
2. $f:(0,\infty)\to(-\infty,\infty)$ defined by $f(x)=1/x$ is not a proper map.

Exercise 10.18. Let $V:=\mathbb{R}^2\setminus\{0\}$ and $f:V\to V$ be defined by $f(x)=\frac{x}{\|x\|^2}$ for $x=(x_1,x_2)\in\mathbb{R}^2.$

1. Explain why $f$ is a proper map.
2. Show $f\circ f(x)=x,$ so that $f$ is in fact a homeomorphism.
3. Compute $\deg(f).$ [Hint: evaluate $f'(p)$ at your favorite point in $V.$]

10.5 Connectedness and the Intermediate Value Theorem

Definition 10.51 (Connected Sets). Let $(X,d)$ be a metric space. We say $A\subset X$ is a (path) connected subset of $X$ if for all $a,b\in A$ there exists a continuous path, $\sigma:[0,1]\to A,$ such that $\sigma(0)=a$ and $\sigma(1)=b.$

Theorem 10.52 (Intermediate Value Theorem). If $A\subset X$ is connected, $\rho:A\to\mathbb{R}$ is continuous, and $a,b\in A,$ then for every $y\in\mathbb{R}$ between $\rho(a)$ and $\rho(b)$ there exists $x\in A$ so that $\rho(x)=y.$

Proof. Choose a continuous path, $\sigma:[0,1]\to A,$ such that $\sigma(0)=a$ and $\sigma(1)=b,$ and let $\gamma(t)=\rho(\sigma(t)).$ Then apply the intermediate value theorem to the continuous function $\gamma$ to find $t_0\in[0,1]$ so that $y=\gamma(t_0)=\rho(\sigma(t_0)),$ and the result is proved by taking $x=\sigma(t_0).$

Fig. 10.3. Three examples of proper one-dimensional maps together with their degrees, which are $0,$ $-1,$ and $1$ respectively. Note that in the first case the map is not surjective, which always leads to a degree zero map. In the one-dimensional case the degree can only take the values $0,$ $1,$ and $-1.$


Corollary 10.53. If $U$ is a connected open subset of $\mathbb{R}^n$ and $\rho:U\to\mathbb{R}\setminus\{0\}$ is a continuous function, then either $\rho(x)>0$ for all $x\in U$ or $\rho(x)<0$ for all $x\in U.$

Proof. If there existed $a,b\in U$ such that $\rho(a)<0<\rho(b),$ then by the intermediate value theorem there would exist $x\in U$ such that $\rho(x)=0,$ which contradicts the assumption that $\rho$ is never $0$ on $U.$

Corollary 10.54. If $f:U\to V$ is a diffeomorphism of connected open sets, then $\mathrm{sgn}(\det f'(q))$ is constant for all $q\in U.$ We let $\mathrm{sgn}(f)$ denote this constant value.

Proof. Take $\rho(q):=\det f'(q)\in\mathbb{R},$ which is continuous on $U$ and never zero. Indeed, if $g:V\to U$ is the inverse map, so that $g(f(q))=q$ for all $q\in U,$ it follows by the chain rule that
\[
g'(f(q))\,f'(q) = I\quad\text{for all }q\in U.
\]
From this equation it follows that $f'(q)$ is invertible, or equivalently $\rho(q)\ne 0,$ for all $q\in U.$ The result now follows from Corollary 10.53.

Remark 10.55. Suppose that $A(t)$ is a continuous path of invertible $n\times n$ real matrices; then $\mathrm{sgn}(\det A(t))$ is constant in $t.$


11 Degree Theory and Change of Variables

11.1 Constructing elements of $C_c^\infty(\text{open rectangles})$

Exercise 11.1. Let
\[
f(x) = \begin{cases} e^{-1/x} & \text{if }x>0,\\ 0 & \text{if }x\le 0,\end{cases}
\]
see Figure 11.1. Show $f\in C^\infty(\mathbb{R},[0,1]).$ Here is a possible outline.

Fig. 11.1. Plot of $y=f(x).$

1. First show that for $x>0$ and $n\in\mathbb{N}$ there is a polynomial function, $p_n(t),$ such that $f^{(n)}(x) = p_n\big(x^{-1}\big)f(x).$ For example, $p_1(t)=t^2$ and $p_2(t)=t^4-2t^3.$
2. Show $\lim_{t\to\infty}\big(t^ne^{-t}\big)=0$ for all $n\in\mathbb{N}$ (hint: take logarithms) and then use this to show $\lim_{x\downarrow 0}f^{(n)}(x)=0$ for all $n\in\mathbb{N}_0=\mathbb{N}\cup\{0\}.$
3. Show, by the mean value theorem along with induction on $n,$ that $f\in C^n(\mathbb{R},\mathbb{R})$ and $f^{(k)}(0)=0$ for $0\le k\le n,$ for all $n\in\mathbb{N}.$

Notation 11.1 (Approximate Heaviside) For $a>0,$ let
\[
H_a(x) = f(x)/\big(f(x)+f(a-x)\big),
\]
so that $H_a\in C^\infty(\mathbb{R},[0,1])$ with $H_a(x)=0$ for $x\le 0$ and $H_a(x)=1$ for $x\ge a;$ see Figure 11.2. [$H_a$ is a smooth approximation to the Heaviside function and tends to a Heaviside function in the limit as $a\downarrow 0.$]

Fig. 11.2. This is a plot of the function $H_2(x) = f(x)/(f(x)+f(2-x)).$

Definition 11.2. For $-\infty<a<b<\infty,$ let
\[
\varphi_{a,b}(x) = f(x-a)\,f(b-x),
\]
see Figure 11.3.

Fig. 11.3. Here are the plots of $\varphi_{0.5,3}$ (black) and $\varphi_{-2,1}$ (red).


These bump functions, $\varphi_{a,b},$ satisfy: 1) $\varphi_{a,b}\in C_c^\infty(\mathbb{R},[0,1]),$ 2) $\varphi_{a,b}(x)>0$ for $x\in(a,b),$ and 3) $\varphi_{a,b}(x)=0$ for $x\notin(a,b);$ in particular, $\operatorname{supp}(\varphi_{a,b}) = \overline{(a,b)} = [a,b].$ A small numerical sketch of these functions is given below.
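Here is that sketch (my own code, not from the notes): the functions below implement $f,$ $H_a,$ and $\varphi_{a,b}$ as defined above, with a `numpy.where` guard to avoid evaluating $e^{-1/x}$ at non-positive $x.$

```python
import numpy as np

def f(x):
    """f(x) = exp(-1/x) for x > 0 and 0 for x <= 0 (Exercise 11.1)."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def H(x, a):
    """Approximate Heaviside H_a of Notation 11.1."""
    return f(x) / (f(x) + f(a - x))

def phi(x, a, b):
    """Bump function phi_{a,b} of Definition 11.2: positive exactly on (a, b)."""
    return f(x - a) * f(b - x)

x = np.linspace(-1.0, 4.0, 11)
print(np.round(H(x, 2.0), 3))          # 0 for x <= 0 and 1 for x >= 2
print(np.round(phi(x, 0.5, 3.0), 4))   # nonzero only at the sample points inside (0.5, 3)
```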

Definition 11.3. For a, b ∈ R^d, write a < b to mean a_j < b_j for j ∈ [d]. When a < b, let R = (a, b) and R̄ = [a, b] denote the open and closed rectangles respectively,

R = (a, b) := (a_1, b_1) × · · · × (a_d, b_d) and
R̄ = [a, b] := [a_1, b_1] × · · · × [a_d, b_d].

Further, for R = (a, b), let ϕ_R = ϕ_{a_1,b_1} ⊗ · · · ⊗ ϕ_{a_d,b_d}, i.e. for x = (x_1, . . . , x_d) ∈ R^d, let

ϕ_R(x) = ϕ_{a_1,b_1}(x_1) · · · ϕ_{a_d,b_d}(x_d).

Note again that: 1) ϕ_R ∈ C^∞_c(R^d, [0, 1]); 2) ϕ_R(x) > 0 for x ∈ (a, b); 3) ϕ_R(x) = 0 for x ∉ (a, b); and in particular supp(ϕ_R) is the closure of (a, b), namely [a, b]. We will use these functions a little later to construct many more compactly supported smooth functions.
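Here is a short numerical sketch (illustrative only; the function names f, H, phi_ab, phi_R mirror the notation above and the convention ϕ_{a,b}(x) = f(x − a)f(b − x) of Definition 11.2) building the approximate Heaviside function and the bump functions in one and several variables.

    import numpy as np

    def f(x):
        x = np.asarray(x, dtype=float)
        return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

    def H(a, x):
        # Approximate Heaviside: 0 for x <= 0, 1 for x >= a, smooth in between.
        return f(x) / (f(x) + f(a - x))

    def phi_ab(a, b, x):
        # Bump function with supp(phi_ab) = [a, b], positive exactly on (a, b).
        return f(x - a) * f(b - x)

    def phi_R(a, b, x):
        # Product bump on the rectangle R = (a1,b1) x ... x (ad,bd); a, b, x are d-vectors.
        return np.prod([phi_ab(ai, bi, xi) for ai, bi, xi in zip(a, b, x)])

    xs = np.linspace(-1.0, 4.0, 9)
    print(np.round(H(2.0, xs), 4))            # 0 below 0, 1 above 2
    print(np.round(phi_ab(0.5, 3.0, xs), 4))  # positive only for 0.5 < x < 3
    print(phi_R([-0.5, -2.0], [3.0, 1.0], [1.0, 0.0]) > 0)  # True: point inside R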

Fig. 11.4. Plot of 20 · ϕ_R(x, y) where R = (−1/2, 3) × (−2, 1).

11.2 Smooth Urysohn's Lemma and Partitions of Unity

In this section we extend the results of the previous section in order to produce "partitions of unity." This is a powerful localization technique (see Proposition 11.6) that is used repeatedly in differential geometry and analysis.

Corollary 11.4 (C^∞ – Urysohn's Lemma). Let U ⊂ R^n be an open set and K ⊂ R^n be a compact set such that K ⊂ U. Then there exists f ∈ C^∞_c(R^n, [0, 1]) such that supp(f) ⊂ U and f = 1 on K.

Proof. Since K is compact it may be covered by finitely many open rectangles {R_j}_{j=1}^m satisfying K ⊂ ∪_{j=1}^m R_j ⊂ ∪_{j=1}^m R̄_j ⊂ U. Now let ϕ_{R_j} ∈ C^∞_c(R^n, [0, 1]) be the bump function constructed in Definition 11.3, so that ϕ_{R_j}(x) > 0 for x ∈ R_j and ϕ_{R_j}(x) = 0 for x ∉ R_j. Then the function ψ(x) := ∑_{j=1}^m ϕ_{R_j}(x) is smooth, supported on the compact subset K′ = ∪_{j=1}^m R̄_j (which is contained in U), and is positive on K. By the extreme value theorem, a := min_{x∈K} ψ(x) > 0, and so the function f(x) := H_a(ψ(x)) (with H_a as in Notation 11.1) has the required properties.

Definition 11.5 (Partitions of Unity). Suppose that X is an open subset of R^n, K ⊂ X is a compact set, and U = {U_j}_{j=1}^m is an open cover of K. We say {h_j}_{j=1}^m ⊂ C^∞_c(X, [0, 1]) is a partition of unity of K relative to U if h_j ∈ C^∞_c(U_j, [0, 1]) for j ∈ [m] and ∑_{j=1}^m h_j = 1 on K.

Proposition 11.6 (Partitions of Unity Exist). Suppose that X is an open subset of R^n, K ⊂ X is a compact set, and U = {U_j}_{j=1}^m is an open cover of K. Then there exists a partition of unity of K relative to U.

Proof. For each x ∈ K choose a bounded open rectangle R_x (or a more general open precompact neighborhood V_x if you prefer) of x such that R̄_x ⊂ U_j for some j. Since K is compact, there exists a finite subset Λ of K such that K ⊂ ∪_{x∈Λ} R_x. Let

F_j = ∪ {R̄_x : x ∈ Λ and R̄_x ⊂ U_j}.

Then F_j is compact, F_j ⊂ U_j for all j, and K ⊂ ∪_{j=1}^m F_j. By Urysohn's Lemma (Corollary 11.4) there exists f_j ∈ C^∞_c(U_j, [0, 1]) such that f_j = 1 on F_j for j = 1, 2, . . . , m. By convention we let f_{m+1} ≡ 1. We now give two methods to finish the proof.

Method 1. Let h_1 = f_1, h_2 = f_2(1 − h_1) = f_2(1 − f_1),

h_3 = f_3(1 − h_1 − h_2) = f_3(1 − f_1 − (1 − f_1)f_2) = f_3(1 − f_1)(1 − f_2),

and continue inductively to define

h_k = f_k(1 − h_1 − · · · − h_{k−1}) = f_k · ∏_{j=1}^{k−1} (1 − f_j) for k = 2, 3, . . . , m,   (11.1)

while at the same time showing


h_{m+1} = (1 − h_1 − · · · − h_m) · 1 = ∏_{j=1}^m (1 − f_j).   (11.2)

From these equations it clearly follows that h_j ∈ C^∞_c(X, [0, 1]) and that supp(h_j) ⊂ supp(f_j) ⊂ U_j. Since ∏_{j=1}^m (1 − f_j) = 0 on K, we have ∑_{j=1}^m h_j = 1 on K, and {h_j}_{j=1}^m is the desired partition of unity.

Method 2. Let g := ∑_{j=1}^m f_j ∈ C^∞_c(X). Then g ≥ 1 on K and hence K ⊂ {g > 1/2}. Choose ϕ ∈ C^∞_c(X, [0, 1]) such that ϕ = 1 on K and supp(ϕ) ⊂ {g > 1/2}, and define f_0 := 1 − ϕ. Then f_0 = 0 on K, f_0 = 1 if g ≤ 1/2, and therefore

f_0 + f_1 + · · · + f_m = f_0 + g > 0 on X.

The desired partition of unity may be constructed as

h_j(x) = f_j(x) / (f_0(x) + · · · + f_m(x)).

Indeed supp(h_j) = supp(f_j) ⊂ U_j, h_j ∈ C^∞_c(X, [0, 1]) and, on K,

h_1 + · · · + h_m = (f_1 + · · · + f_m)/(f_0 + f_1 + · · · + f_m) = (f_1 + · · · + f_m)/(f_1 + · · · + f_m) = 1.
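The inductive formula of Method 1 is easy to test numerically. The sketch below (illustrative only; the cover and the plateau functions are ad hoc choices built from the approximate Heaviside of Notation 11.1) constructs h_1 and h_2 on a compact set K = [0, 2] and checks that they sum to 1 on K.

    import numpy as np

    def f(x):
        x = np.asarray(x, dtype=float)
        return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

    def plateau(a, b, x, pad=0.5):
        # A smooth function equal to 1 on [a, b] and supported in (a - pad, b + pad).
        H = lambda s: f(s) / (f(s) + f(pad - s))
        return H(x - (a - pad)) * H((b + pad) - x)

    # Cover K = [0, 2] by two open sets; f1, f2 equal 1 on [0, 1] and [1, 2] respectively.
    x = np.linspace(0.0, 2.0, 201)          # sample points of K
    f1, f2 = plateau(0.0, 1.0, x), plateau(1.0, 2.0, x)

    h1 = f1                                  # Method 1: h1 = f1
    h2 = f2 * (1.0 - f1)                     #           h2 = f2 (1 - f1)
    print(np.allclose(h1 + h2, 1.0))         # True: h1 + h2 = 1 on K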

11.3 The statement of the degree theorem

Lemma 11.7. If f : U → V is a smooth proper map and ω ∈ Ω^n_c(V), then f*ω ∈ Ω^n_c(U).

Proof. The only thing to prove is that f*ω is compactly supported. This is the case since if K is a compact subset of V such that ω_q = 0 for q ∉ K, then (f*ω)_p = ω_{f(p)}(f_{*p}(·), . . . , f_{*p}(·)) = 0 if f(p) ∉ K, i.e. if p ∉ f^{−1}(K). This shows supp(f*ω) ⊂ f^{−1}(K), which is compact since f is proper.

Let us repeat the statement of Theorem 9.9, which is the main theorem we wish to prove in this chapter.

Theorem 11.8. Let U and V be connected open subsets of R^n and f : U → V be a smooth proper map. Then there exists an integer deg(f) ∈ Z such that

∫_U f*ω = deg(f) · ∫_V ω for all ω ∈ Ω^n_c(V).   (11.3)

The degree function has the following properties:

1. deg(f ∘ g) = deg(f) · deg(g),
2. deg(f) = 1 (respectively −1) if f is an orientation preserving (respectively reversing) diffeomorphism (recall Definition 9.8),
3. deg(f) = 0 if f(U) ⊊ V,
4. if q ∈ V is a regular value of f, then

deg(f) = ∑_{p ∈ f^{−1}(q)} sgn(det f′(p)).   (11.4)

[It is a consequence of the properness of f, along with the inverse function theorem, that f^{−1}(q) is necessarily a finite set.]

Proof. We will give the full proof of this theorem, with the exception of item 4, in the course of this chapter. Here is a sketch of item 4; see Section 3.6 of the text, in particular Theorem 3.6.4, for a more detailed proof.

Step 1. By Sard's theorem (again see the book), there exists a regular value q for f.

Step 2. Making use of the inverse function theorem and the properness of f, one shows that k = #(f^{−1}(q)) < ∞. Let

f^{−1}(q) = {p_1, . . . , p_k}.

Step 3. Again as a consequence of the inverse function theorem, there is an open neighborhood O of q contained in V and disjoint open neighborhoods N_j ⊂ U of the p_j such that f^{−1}(O) = ∪_{j=1}^k N_j and f_j := f|_{N_j} : N_j → O is a diffeomorphism, see Figure 11.5.

Fig. 11.5. Here we suppose that q is a regular value and f^{−1}(q) is a set of size 4.

Step 4. Choose ω ∈ Ω^n_c(O) such that ∫_O ω = 1. Then

deg(f) = ∫_U f*ω = ∑_{j=1}^k ∫_{N_j} f_j*ω = ∑_{j=1}^k ε_j ∫_O ω = ∑_{j=1}^k ε_j


where ε_j ∈ {±1}, with ε_j = 1 if f_j is orientation preserving and ε_j = −1 if f_j is orientation reversing, i.e. ε_j = sgn(det f′(p_j)).

Exercise 11.2. As mentioned in class, the study of complex variables is essentially the study of functions (or vector fields if you prefer) f ∈ C^1(U → R^2) (U an open subset of R^2) of the form f(x, y) = (u(x, y), v(x, y))^tr such that u and v satisfy the Cauchy–Riemann equations:

u_y = −v_x and v_y = u_x on U.   (11.5)

Show under these assumptions that

det f′ = u_x^2 + u_y^2 = v_x^2 + v_y^2 ≥ 0 on U.

Remark 11.9. For those who have taken Math 120A, you have seen lots of functions f as in Exercise 11.2. For example, you will have learned that

u(x, y) = Re z^n = Re[(x + iy)^n] and v(x, y) = Im z^n = Im[(x + iy)^n]

satisfy Eqs. (11.5). You will also have found explicitly the n roots of the equation z^n = 1. Given this we may conclude that for f = (u, v)^tr with u and v as above, deg(f) = n, which is consistent with the degree of z^n as a polynomial function of z. On the other hand, if g(x, y) = f(x, −y), which corresponds to z̄^n, we will have deg(g) = −n. The point is that g = f ∘ R where R(x, y) = (x, −y), and so deg(g) = deg(f) · deg(R) = n · (−1) = −n. See Theorem 3.6.17 of the book, which generalizes this result to general polynomial functions of z and then uses it to prove that all such non-constant polynomials have roots.

Example 11.10. Let f(x, y) = (x + iy)^3 = x^3 − 3xy^2 + i(3x^2y − y^3), i.e. f = u + iv where u = x^3 − 3xy^2 and v = 3x^2y − y^3. Let us note that

f_x = 3x^2 − 3y^2 + i·6xy,
f_y = −6xy + i(3x^2 − 3y^2) = i f_x,

so that f satisfies the Cauchy–Riemann equations. As z^3 = 1 has three roots, namely 1, e^{2πi/3}, and e^{4πi/3}, and det f′ > 0 at each of these roots, it follows from Eq. (11.4) that deg(f) = 3.
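As a numerical sanity check of Eq. (11.4) in this example (illustrative code only), one can locate the three preimages of the regular value q = (1, 0) under f viewed as a map of R^2 and verify that each Jacobian determinant is positive, so the signs sum to deg(f) = 3.

    import numpy as np

    def F(p):
        z = complex(p[0], p[1])
        w = z**3
        return np.array([w.real, w.imag])

    def jac_det(p, h=1e-6):
        # Numerical Jacobian determinant of F at p via central differences.
        J = np.zeros((2, 2))
        for i in range(2):
            e = np.zeros(2); e[i] = h
            J[:, i] = (F(p + e) - F(p - e)) / (2 * h)
        return np.linalg.det(J)

    # Preimages of the regular value q = (1, 0): the three cube roots of unity.
    roots = [np.array([np.cos(2*np.pi*k/3), np.sin(2*np.pi*k/3)]) for k in range(3)]
    print(all(np.allclose(F(r), [1.0, 0.0], atol=1e-12) for r in roots))  # True
    print(sum(np.sign(jac_det(r)) for r in roots))                        # 3.0 = deg(f)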

Let us now write out Eq. (11.3) in terms of functions rather than forms. Let x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n) be the standard coordinates on U and V respectively.² Let

ω = g(y) dy_1 ∧ · · · ∧ dy_n ∈ Ω^n_c(V),

in which case we have seen (writing y_j = f_j(x)) that

f*ω = g ∘ f(x) · det f′(x) dx_1 ∧ · · · ∧ dx_n.

With this notation, Eq. (11.3) becomes

∫_U g ∘ f(x) · det f′(x) dx_1 ∧ · · · ∧ dx_n = deg(f) ∫_V g(y) dy_1 ∧ · · · ∧ dy_n.   (11.6)

This can be remembered as follows. Making the change of variables y_j = f_j(x) = f_j(x_1, . . . , x_n), one has

g(y) dy_1 ∧ · · · ∧ dy_n = g ∘ f(x) · df_1(x) ∧ · · · ∧ df_n(x) = g ∘ f(x) · det f′(x) dx_1 ∧ · · · ∧ dx_n,

and so the notation suggests that

∫_V g(y) dy_1 ∧ · · · ∧ dy_n = ∫_U g ∘ f(x) · det f′(x) dx_1 ∧ · · · ∧ dx_n,

which is not quite correct, being off exactly by the factor deg(f). The following theorem records this result without the differential form language.

² So x = y, but it is safer to keep them separate.

Theorem 11.11. Let f : U → V and deg(f) ∈ Z be as in Theorem 11.8. Then for all g ∈ C^∞_c(V) we have

∫_U g ∘ f · det f′ dm = deg(f) ∫_V g dm.   (11.7)

Corollary 11.12. Let f : U → V and deg(f) ∈ Z be as in Theorem 11.8. If g : V → R is a (measurable) function such that

∫_V |g| dm + ∫_U |g ∘ f| · |det f′| dm < ∞,   (11.8)

then Eqs. (11.6) and (11.7) still hold!

Proof. The proof follows by standard approximation theorems in the theory of the Lebesgue integral, and we omit it here.

Exercise 11.3. Using Eq. (11.6) (justified by Corollary 11.12), show: if g : R^2 → R is a bounded compactly supported function, then

∫_{R^2} g(x^2 − y^2, 2xy) · 4(x^2 + y^2) dx ∧ dy = 2 · ∫_{R^2} g(u, v) du ∧ dv.

Hint: make the "change of variables" (u, v) = f(x, y) = (x^2 − y^2, 2xy) and use your results from Exercise 9.1.
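Here is a rough numerical sketch of the identity in Exercise 11.3 (illustrative only; g below is an arbitrary bounded compactly supported test function, and the modest grid means the two numbers agree only approximately).

    import numpy as np

    def g(u, v):
        # A bounded, compactly supported test function (an arbitrary choice).
        r2 = u**2 + v**2
        return np.where(r2 < 4.0, np.exp(-1.0 / np.maximum(4.0 - r2, 1e-12)), 0.0)

    n, L = 801, 3.0
    s = np.linspace(-L, L, n)
    dx = s[1] - s[0]
    X, Y = np.meshgrid(s, s, indexing="ij")

    lhs = np.sum(g(X**2 - Y**2, 2*X*Y) * 4*(X**2 + Y**2)) * dx**2  # left side
    rhs = 2.0 * np.sum(g(X, Y)) * dx**2                            # 2 * integral of g
    print(lhs, rhs)   # the two numbers should agree to a few decimal places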


Exercise 11.4. Use Exercise 11.3 to find the explicit value of the integral

∫_{R^2} 1_{(1,2)}(x^2 − y^2) · 1_{(0,1)}(2xy) · (x^4 − y^4) dx ∧ dy.

Hint: x^4 − y^4 = (x^2 − y^2)(x^2 + y^2) = u(x^2 + y^2).

Exercise 11.5. Let D be a connected open subset of R^k, U an open subset of R^n, and γ : D → U a parametrized k-surface in U as in Definition 9.3. Further assume that ω ∈ Ω^k(U) is such that γ*ω ∈ Ω^k_c(D). If Q is another connected open subset of R^k and ϕ : Q → D is a diffeomorphism, show

∫_{γ∘ϕ} ω = deg(ϕ) · ∫_γ ω,

where in this case deg(ϕ) ∈ {±1}.

11.4 Proof of the Degree Theorem

Proposition 11.13. If U ⊂_o R^n and µ ∈ Ω^{n−1}_c(U), then ∫_U dµ = 0.

Proof. Let Vol = dx_1 ∧ · · · ∧ dx_n and choose g = (g_1, g_2, . . . , g_n) with g_i ∈ C^∞_c(U) "⊂" C^∞_c(R^n) such that

µ = i_g Vol = ∑_{j=1}^n (−1)^{j−1} g_j · dx_1 ∧ · · · ∧ d̂x_j ∧ · · · ∧ dx_n

(the hat indicating that the factor dx_j is omitted). Then, as you have shown in homework, dµ = (∇ · g) Vol and hence

∫_U dµ = ∫_{R^n} (∇ · g)(t_1, . . . , t_n) dt_1 . . . dt_n
       = ∫_{R^n} ∑_{j=1}^n ∂_j g_j(t_1, . . . , t_n) dt_1 . . . dt_n
       = ∑_{j=1}^n ∫_{R^n} ∂_j g_j(t_1, . . . , t_n) dt_1 . . . dt_n = 0,

wherein we have used Fubini's theorem and the one-dimensional fundamental theorem of calculus to show

∫_R ∂_j g_j(t_1, . . . , t_n) dt_j = 0 for each j ∈ [n].
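In the language of vector fields, Proposition 11.13 says that the integral of the divergence of a compactly supported smooth field vanishes. A quick numerical sketch (illustrative; the compactly supported field g below is an arbitrary choice):

    import numpy as np

    def bump(x, y):
        # A smooth compactly supported function on R^2 (supported in the unit disk).
        r2 = x**2 + y**2
        return np.where(r2 < 1.0, np.exp(-1.0 / np.maximum(1.0 - r2, 1e-12)), 0.0)

    n, L = 601, 1.5
    s = np.linspace(-L, L, n)
    h = s[1] - s[0]
    X, Y = np.meshgrid(s, s, indexing="ij")

    # g = (g1, g2) is compactly supported; compute div g by central differences.
    g1, g2 = bump(X, Y) * np.sin(X), bump(X, Y) * (X * Y)
    div = np.gradient(g1, h, axis=0) + np.gradient(g2, h, axis=1)
    print(np.sum(div) * h**2)   # ~ 0, as Proposition 11.13 predicts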

The proof of the next important theorem (which is a partial converse to Proposition 11.13) will be delayed until Chapter 12 below.

Theorem 11.14 (A Poincaré Lemma). If U ⊂_o R^n is connected and ω ∈ Ω^n_c(U) with ∫_U ω = 0, then ω = dµ for some µ ∈ Ω^{n−1}_c(U).

Proof. This is an easy consequence of Theorem 12.5 in Chapter 12. The n = 1 case was the content of book Exercise 3.2.i. Let me at least point out here the basic idea of how to go from the local version of the Poincaré lemma to the global version. Here is the claim.

Claim: If U_1 and U_2 are open connected sets such that U_1 ∩ U_2 ≠ ∅ and the Poincaré lemma holds on U_1 and U_2 separately, then it also holds on U = U_1 ∪ U_2.

Proof of claim. Let ω_0 ∈ Ω^n_c(U_1 ∩ U_2) be chosen so that ∫ω_0 = 1. Now suppose that ω ∈ Ω^n_c(U) and K = supp(ω) is the compact support of ω. Let {h_1, h_2} be a partition of unity of K (i.e. h_i ∈ C^∞_c(U_i, [0, 1]) and h_1 + h_2 = 1 on K) and let c_i := ∫ h_i ω. Then h_i ω − c_i ω_0 ∈ Ω^n_c(U_i) with ∫(h_i ω − c_i ω_0) = 0, and so by assumption there exists µ_i ∈ Ω^{n−1}_c(U_i) such that

h_i ω − c_i ω_0 = dµ_i for i = 1, 2.

Summing this equation on i, using h_1 ω + h_2 ω = ω and

c_1 + c_2 = ∫h_1 ω + ∫h_2 ω = ∫(h_1 ω + h_2 ω) = ∫ω = 0,

we find that ω = dµ_1 + dµ_2 = dµ, where µ = µ_1 + µ_2 ∈ Ω^{n−1}_c(U).

Theorem 11.15 (Defining the Degree). Let U, V be connected open subsets of R^n and suppose that f : U → V is a smooth proper map. Then there exists a real number deg(f) ∈ R (we will eventually see that deg(f) ∈ Z) such that

∫_U f*ω = deg(f) ∫_V ω for all ω ∈ Ω^n_c(V).

Proof. Let ω_0 ∈ Ω^n_c(V) be chosen so that ∫_V ω_0 = 1 and then define

deg(f) := ∫_U f*ω_0.

Now suppose that ω ∈ Ω^n_c(V) and c := ∫_V ω. Then ∫_V [ω − cω_0] = 0 and hence ω − cω_0 = dα for some α ∈ Ω^{n−1}_c(V). Thus we have

∫_U f*ω = c ∫_U f*ω_0 + ∫_U f*dα = c · deg(f) + ∫_U d(f*α),

where f*α ∈ Ω^{n−1}_c(U) – it is compactly supported because f is proper! Therefore ∫_U d(f*α) = 0 by Proposition 11.13 and we conclude that


∫_U f*ω = deg(f) · c = deg(f) ∫_V ω.

Corollary 11.16. Suppose that U, V, W are connected open subsets of R^n and f : U → V and g : V → W are smooth proper maps,

U --f--> V --g--> W.

Then g ∘ f : U → W is proper and deg(g ∘ f) = deg(g) deg(f).

Proof. If K is a compact subset of W, then (g ∘ f)^{−1}(K) = f^{−1}(g^{−1}(K)) is compact, since both f and g are proper and so pull back compact sets to compact sets. Moreover, if ω ∈ Ω^n_c(W) with ∫_W ω = 1, then g*ω ∈ Ω^n_c(V) and (g ∘ f)*ω = f*g*ω ∈ Ω^n_c(U) by Lemma 11.7. So on one hand,

∫_U (g ∘ f)*ω = deg(g ∘ f) ∫_W ω = deg(g ∘ f),

while on the other hand,

∫_U (g ∘ f)*ω = ∫_U f*(g*ω) = deg(f) ∫_V g*ω = deg(f) · deg(g) ∫_W ω = deg(f) · deg(g).

Lemma 11.17. If U and V are open subsets of R^n and f : U → V is a proper map, then f(U) is a closed subset of V. [Note we are not saying f(U) is closed as a subset of R^n!] More generally, if C is a closed subset of U, then f(C) is a closed subset of V.

Proof. We will give the proof when C = U and leave it to the reader to see that the proof works more generally. Let {y_n}_{n=1}^∞ ⊂ f(U) ⊂ V be a sequence which we assume converges to some y ∈ V. By assumption we may choose x_n ∈ U so that y_n = f(x_n) for all n. As you should verify, K := {y_1, y_2, . . .} ∪ {y} is a compact subset of V, and therefore f^{−1}(K) is compact and contains {x_n}_{n=1}^∞. Thus we may find a subsequence {x_{n_k}}_{k=1}^∞ such that x = lim_{k→∞} x_{n_k} exists in f^{−1}(K) ⊂ U. Therefore it follows that

y = lim_{k→∞} f(x_{n_k}) = f(lim_{k→∞} x_{n_k}) = f(x) ∈ f(U) ⊂ V.

Thus we have shown f(U) is closed.

Corollary 11.18. If U and V are connected open subsets of R^n and f : U → V is a smooth proper map which is not surjective, i.e. f(U) ⊊ V, then deg(f) = 0.

Proof. By Lemma 11.17, V \ f(U) is open, and it is non-empty by assumption; therefore we can find ω ∈ Ω^n_c(V \ f(U)) so that ∫_V ω = 1. However, f*ω is then identically zero (see the proof of Lemma 11.7), and therefore

deg(f) = ∫_U f*ω = 0.

11.5 Degrees of Diffeomorphisms

Let us recall the following lemma.

Lemma 11.19 (Smooth bump functions). There exists ρ ∈ C^∞_c(R^d, [0, ∞)) such that ρ(0) > 0, supp(ρ) ⊂ B(0, 1), and ∫_{R^d} ρ(x) dx = 1. By translation and scaling, such functions exist with support contained in any given ball.

Proof. Define h(t) = f(1 − t) f(t + 1), where f is as in Exercise 11.1. Then h ∈ C^∞_c(R, [0, 1]), supp(h) ⊂ [−1, 1] and h(0) = e^{−2} > 0. Define c = ∫_{R^d} h(|x|^2) dx. Then ρ(x) = c^{−1} h(|x|^2) is the desired function.

Theorem 11.20. Let f : U → V be a smooth diffeomorphism of connected open sets. Then deg(f) = sgn(det f′(p_0)) ∈ {±1} for any choice of p_0 ∈ U.

Proof. Let q_0 = f(p_0). We will use approximate δ-function ideas in order to compute the degree of a diffeomorphism. Here is the idea. Let ρ ∈ C^∞_c(B(0, 1), [0, ∞)) be such that ∫ρ dm = 1, and for ε > 0 let ρ_ε(y) := ε^{−n} ρ(y/ε) for all y ∈ R^n. For ε > 0 sufficiently small we define

ω_ε := ρ_ε(y − q_0) dy_{[n]} ∈ Ω^n_c(V), so that ∫_V ω_ε = 1.

We have already seen that deg(f) = ∫_U f*ω_ε for any such ε > 0, and so we also have

deg(f) = lim_{ε↓0} ∫ f*ω_ε.

We now compute this limit. To this end we use

f*ω_ε = ρ_ε(f(x) − q_0) det f′(x) dx_{[n]},

so that


∫ f*ω_ε = ∫ ρ_ε(f(x) − q_0) det f′(x) dm(x) = ∫ ε^{−n} ρ((f(x) − q_0)/ε) det f′(x) dm(x).

In this last integral we make the linear change of variables x = p_0 + εz to find

∫ f*ω_ε = ∫ ρ((f(p_0 + εz) − q_0)/ε) det f′(p_0 + εz) dm(z).

Formally letting ε ↓ 0 this shows

deg(f) = lim_{ε↓0} ∫ f*ω_ε
       = lim_{ε↓0} ∫ ρ((f(p_0 + εz) − q_0)/ε) det f′(p_0 + εz) dm(z)   (11.9)
       = ∫ lim_{ε↓0} [ρ((f(p_0 + εz) − q_0)/ε) det f′(p_0 + εz)] dm(z)   (11.10)
       = ∫ ρ(f′(p_0) z) det f′(p_0) dm(z)
       = sgn(det f′(p_0)) ∫ ρ(f′(p_0) z) |det f′(p_0)| dm(z)
       = sgn(det f′(p_0)) ∫ ρ(y) dm(y) = sgn(det f′(p_0)),

where in the last line we made the change of variables y = f′(p_0) z. Thus if we can justify switching the limit with the integral in Eq. (11.10), the proof will be complete. This interchange follows from the dominated convergence theorem once we produce an appropriate dominating function; see Eq. (11.12) of Corollary 11.24 in the appendix of this chapter.
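The ε ↓ 0 computation above can be mimicked numerically. In the sketch below (illustrative only; the orientation-reversing diffeomorphism f of R^2 is an arbitrary choice, and the bump ρ is normalized on the grid rather than exactly), the ε-dependent integral is close to sgn(det f′(p_0)) = −1.

    import numpy as np

    def f(X, Y):
        # An orientation-reversing diffeomorphism of R^2 (illustrative choice).
        return Y + 0.1 * np.sin(X), X

    def det_jac(X, Y, h=1e-6):
        # det f'(x, y) by central differences, evaluated on a grid.
        f1xp, f2xp = f(X + h, Y); f1xm, f2xm = f(X - h, Y)
        f1yp, f2yp = f(X, Y + h); f1ym, f2ym = f(X, Y - h)
        a, c = (f1xp - f1xm) / (2*h), (f2xp - f2xm) / (2*h)   # x-derivatives
        b, d = (f1yp - f1ym) / (2*h), (f2yp - f2ym) / (2*h)   # y-derivatives
        return a * d - b * c

    def rho(u, v):
        # Un-normalized radial bump supported in the unit ball of R^2.
        r2 = u**2 + v**2
        return np.where(r2 < 1.0, np.exp(-1.0 / np.maximum(1.0 - r2, 1e-12)), 0.0)

    p0 = np.array([0.3, -0.2]); q0 = np.array(f(*p0))
    s = np.linspace(-1.5, 1.5, 601); h = s[1] - s[0]
    Z1, Z2 = np.meshgrid(s, s, indexing="ij")
    mass = np.sum(rho(Z1, Z2)) * h**2            # grid approximation of the integral of rho

    for eps in [0.5, 0.2, 0.1]:
        X, Y = p0[0] + eps * Z1, p0[1] + eps * Z2
        F1, F2 = f(X, Y)
        vals = rho((F1 - q0[0]) / eps, (F2 - q0[1]) / eps) * det_jac(X, Y)
        print(eps, np.sum(vals) * h**2 / mass)    # ~ sgn(det f'(p0)) = -1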

The next theorem summarizes the general form of the change of variables theorem we will need for the rest of these notes.

Theorem 11.21 (Change of Variables). Suppose that f : U → V is a diffeomorphism of (not necessarily connected) open sets. Then for any ω ∈ Ω^n_c(V), we have

∫_V ω = ∫_U sgn(det f′) · f*ω.   (11.11)

Proof. Case 1. If U, and hence V, is connected, then sgn(det f′) = deg(f) ∈ {±1} is constant and hence

∫_U sgn(det f′) · f*ω = deg(f) ∫_U f*ω = [deg(f)]^2 ∫_V ω = ∫_V ω.

Case 2. Let K = supp(ω), let {V_j}_{j=1}^ℓ be the connected components of V such that K ∩ V_j ≠ ∅, and let U_j = f^{−1}(V_j). Then making repeated use of Case 1 shows

∫_U sgn(det f′) · f*ω = ∑_{j=1}^ℓ ∫_{U_j} sgn(det f′) · f*ω = ∑_{j=1}^ℓ ∫_{V_j} ω = ∫_V ω.

Remark 11.22. If ω = g dx_1 ∧ · · · ∧ dx_n, where {x_j}_{j=1}^n are the standard coordinates on R^n, then

f*ω = g ∘ f · (det f′) dx_1 ∧ · · · ∧ dx_n

and so

sgn(det f′) · f*ω = g ∘ f · |det f′| · dx_1 ∧ · · · ∧ dx_n,

and Eq. (11.11) becomes the usual vector calculus change of variables formula,

∫_V g dm = ∫_U g ∘ f · |det f′| dm.
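A quick numerical illustration of the last formula (a sketch only; the diffeomorphism and the integrand are arbitrary choices): polar coordinates f(r, θ) = (r cos θ, r sin θ) mapping a rectangle onto an annulus, compared against the exact value of the right-hand side.

    import numpy as np

    # f(r, theta) = (r cos theta, r sin theta) maps U = (1, 2) x (0, 2*pi) onto an annulus V.
    def g(x, y):
        return np.exp(-(x**2 + y**2))            # the integrand on V

    n = 400
    r = np.linspace(1, 2, n, endpoint=False) + 0.5 / n        # midpoint grid in r
    t = np.linspace(0, 2*np.pi, n, endpoint=False) + np.pi / n  # midpoint grid in theta
    R, T = np.meshgrid(r, t, indexing="ij")
    dA = (1.0 / n) * (2*np.pi / n)

    lhs = np.sum(g(R*np.cos(T), R*np.sin(T)) * np.abs(R)) * dA  # integral of g∘f |det f'|, det f' = r

    # Exact value of the integral of g over the annulus 1 < |x| < 2:
    rhs = np.pi * (np.exp(-1.0) - np.exp(-4.0))
    print(lhs, rhs)   # the two values should agree to several decimal places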

11.6 *Appendix: Constructing the dominating function

[You may omit this subsection on first reading.] Before we can produce the desired dominating function we need to bound the size of the support of the function z → ρ((f(p_0 + εz) − q_0)/ε). The next proposition is the key ingredient for finding such a bound.

Proposition 11.23. Let f : U → V be a C^1-diffeomorphism of open sets, p_0 ∈ U, and q_0 = f(p_0) ∈ V. Then there exist ε_0 > 0 and C = C(f, p_0, ε_0) < ∞ such that for all ε ∈ (0, ε_0],

{x ∈ U : |f(x) − q_0| < ε} = f^{−1}(B(q_0, ε)) ⊂ B(p_0, Cε).

Proof. Let S = f^{−1} : V → U, so that S(q_0) = p_0. By the fundamental theorem of calculus,

S(q_0 + x) = S(q_0) + ∫_0^1 S′(q_0 + tx) x dt = p_0 + S′(q_0) x + γ(x) x,

where


γ(x) := ∫_0^1 [S′(q_0 + tx) − S′(q_0)] dt.

Therefore, for ε > 0 small,

|S(q_0 + x) − p_0| ≤ (|S′(q_0)| + γ_ε) |x| for |x| ≤ ε,

where γ_ε := max_{|x|≤ε} |γ(x)| → 0 as ε ↓ 0. If we let C = |S′(q_0)| + 1, it then follows for all ε > 0 sufficiently small that

|S(q_0 + x) − p_0| ≤ C |x| for all |x| ≤ ε,

or equivalently stated,

|S(y) − p_0| ≤ C |y − q_0| ≤ Cε for all y ∈ B(q_0, ε),

and hence

f^{−1}(B(q_0, ε)) = S(B(q_0, ε)) ⊂ B(p_0, Cε).

Corollary 11.24. Continuing the notation in Eq. (11.9), there exist ε_0 > 0 and C, M < ∞ such that for all ε ∈ (0, ε_0],

|ρ((f(p_0 + εz) − q_0)/ε) det f′(p_0 + εz)| ≤ M · 1_{|z|≤C} for all z ∈ R^n.   (11.12)

Proof. If ε_0 > 0 and C ∈ (0, ∞) are the constants introduced in Proposition 11.23, then

ρ((f(p_0 + εz) − q_0)/ε) > 0 =⇒ |f(p_0 + εz) − q_0|/ε < 1
                            ⇐⇒ |f(p_0 + εz) − q_0| < ε
                            ⇐⇒ p_0 + εz ∈ f^{−1}(B(q_0, ε))
                            =⇒ p_0 + εz ∈ B(p_0, Cε)
                            =⇒ |z| ≤ C.

This shows that z → ρ((f(p_0 + εz) − q_0)/ε) is supported in the closed ball of radius C. To complete the proof, let M_1 = sup_{x∈R^n} |ρ(x)|, M_2 ≥ sup_{|x|≤ε_0 C} |det f′(p_0 + x)|, and M = M_1 · M_2, in which case we find for all ε ∈ (0, ε_0] that

|ρ((f(p_0 + εz) − q_0)/ε) det f′(p_0 + εz)| ≤ M_1 · M_2 · 1_{|z|≤C} = M · 1_{|z|≤C}.


12

The Poincaré Lemma Proof

The goal of this chapter is to prove the Poincaré Lemma in Theorem 11.14 above. The outline of the chapter is as follows.

1. Section 12.1 gives conditions sufficient to justify differentiating past the integral.
2. In Section 12.2 we prove Theorem 11.14 when U is an open rectangle.
3. Lastly, in Section 12.3 we combine the results of Sections 12.2 and 11.2 to prove Theorem 12.5 below, which is equivalent to the Poincaré lemma in Theorem 11.14.

12.1 Differentiating past the integral

Corollary 12.1 (Differentiation Under the Integral). Suppose that J ⊂ R is an open interval and f : J × R^d → R is a function such that

1. x → f(t, x) is measurable for each t ∈ J,
2. f(t_0, ·) ∈ L^1(m) for some t_0 ∈ J,
3. ∂f/∂t (t, x) exists for all (t, x), and
4. there is a function g ∈ L^1(m) such that |∂f/∂t (t, ·)| ≤ g for each t ∈ J.

Then f(t, ·) ∈ L^1(m) for all t ∈ J (i.e. ∫_{R^d} |f(t, x)| dm(x) < ∞), t → ∫_{R^d} f(t, x) dm(x) is a differentiable function on J, and

d/dt ∫_{R^d} f(t, x) dm(x) = ∫_{R^d} ∂f/∂t (t, x) dm(x).

Proof. By the mean value theorem,

|f(t, x) − f(t_0, x)| ≤ g(x) |t − t_0| for all t ∈ J,   (12.1)

and hence

|f(t, x)| ≤ |f(t, x) − f(t_0, x)| + |f(t_0, x)| ≤ g(x) |t − t_0| + |f(t_0, x)|.

This shows f(t, ·) ∈ L^1(m) for all t ∈ J. Let G(t) := ∫_{R^d} f(t, x) dm(x); then

(G(t) − G(t_0))/(t − t_0) = ∫_{R^d} (f(t, x) − f(t_0, x))/(t − t_0) dm(x).

By assumption,

lim_{t→t_0} (f(t, x) − f(t_0, x))/(t − t_0) = ∂f/∂t (t_0, x) for all x ∈ R^d,

and by Eq. (12.1),

|(f(t, x) − f(t_0, x))/(t − t_0)| ≤ g(x) for all t ∈ J and x ∈ R^d.

Therefore, we may apply the dominated convergence theorem to conclude

lim_{n→∞} (G(t_n) − G(t_0))/(t_n − t_0) = lim_{n→∞} ∫_{R^d} (f(t_n, x) − f(t_0, x))/(t_n − t_0) dm(x)
= ∫_{R^d} lim_{n→∞} (f(t_n, x) − f(t_0, x))/(t_n − t_0) dm(x)
= ∫_{R^d} ∂f/∂t (t_0, x) dm(x)

for all sequences t_n ∈ J \ {t_0} such that t_n → t_0. Therefore G′(t_0) = lim_{t→t_0} (G(t) − G(t_0))/(t − t_0) exists and

G′(t_0) = ∫_{R^d} ∂f/∂t (t_0, x) dm(x).
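The following sketch illustrates Corollary 12.1 numerically (with an arbitrary choice of f satisfying the hypotheses): the difference quotient of G(t) = ∫ f(t, x) dm(x) is compared with ∫ ∂f/∂t (t, x) dm(x) and with the known closed form.

    import numpy as np

    # f(t, x) = exp(-x^2) cos(t x); then df/dt = -x exp(-x^2) sin(t x),
    # which is dominated by the integrable function g(x) = |x| exp(-x^2).
    x = np.linspace(-10, 10, 20001)
    dx = x[1] - x[0]

    def G(t):
        return np.sum(np.exp(-x**2) * np.cos(t * x)) * dx

    t0, h = 0.7, 1e-5
    diff_quotient = (G(t0 + h) - G(t0 - h)) / (2 * h)
    integral_of_dfdt = np.sum(-x * np.exp(-x**2) * np.sin(t0 * x)) * dx
    exact = -np.sqrt(np.pi) * (t0 / 2) * np.exp(-t0**2 / 4)   # G(t) = sqrt(pi) e^{-t^2/4}
    print(diff_quotient, integral_of_dfdt, exact)             # all three agree closely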

12.2 A Local Poincaré Lemma

Theorem 12.2 (A Local Poincaré Lemma). Let n ∈ N, −∞ < a_i < b_i < ∞ for i ∈ [n], and let R be the open rectangle

R = (a, b) = (a_1, b_1) × · · · × (a_n, b_n) ⊂ R^n.

If ω ∈ Ω^n_c(R) satisfies ∫_R ω = 0, then there exists µ ∈ Ω^{n−1}_c(R) such that ω = dµ.


Proof. Let Vol = dx_1 ∧ · · · ∧ dx_n, so that ω = f Vol for some f ∈ C^∞_c(R). Further let g = (g_1, g_2, . . . , g_n) with g_i ∈ C^∞_c(R) and set

µ = i_g Vol = ∑_{j=1}^n (−1)^{j−1} g_j · dx_1 ∧ · · · ∧ d̂x_j ∧ · · · ∧ dx_n,

so that, as you have proved, dµ = (∇ · g) Vol. Thus we may reformulate the problem as: given f ∈ C^∞_c(R) such that ∫_R f dm = 0, find g_i ∈ C^∞_c(R) such that f = ∇ · g. The proof is by induction on n.

For n = 1 this is easily done¹ using

g(x) = ∫_{−∞}^x f(t) dt.

Note that g(x) = 0 when x < b but x is sufficiently close to b, since then

g(x) = ∫_{−∞}^x f(t) dt = ∫_a^b f(t) dt = 0.

For the inductive step, let us write (x, y) ∈ R × R^{n−1} and set

h(x, y) := ∫_{−∞}^x f(t, y) dt.

We then have ∂_1 h(x, y) = f(x, y) as desired; however,

h(∞, y) := ∫_R f(t, y) dt

need not be zero. To fix this we choose ρ ∈ C^∞_c((a_1, b_1), [0, ∞)) so that ∫_R ρ(t) dt = 1 and then let

f̃(x, y) := f(x, y) − ρ(x) h(∞, y),

which is still compactly supported in R. We then let

g_1(x, y) = ∫_{−∞}^x f̃(t, y) dt = h(x, y) − ∫_{−∞}^x ρ(t) dt · h(∞, y),

which is again compactly supported in R. Moreover,

∂_1 g_1(x, y) = f(x, y) − ρ(x) h(∞, y).

Since

∫_{R^{n−1}} h(∞, y) dy = ∫_{R^n} f(x, y) dx dy = 0,

we may use the induction hypothesis to find g̃_2(y), . . . , g̃_n(y) with compact support in ∏_{i=2}^n (a_i, b_i) so that

h(∞, y) = ∑_{j=2}^n ∂_j g̃_j(y).

It then follows that taking g_j(x, y) = ρ(x) g̃_j(y) for 2 ≤ j ≤ n gives a g such that

∑_{j=1}^n ∂_j g_j(x, y) = f(x, y) − ρ(x) h(∞, y) + ρ(x) ∑_{j=2}^n ∂_j g̃_j(y)
                       = f(x, y) − ρ(x) h(∞, y) + ρ(x) h(∞, y) = f(x, y),

as desired.

¹ In fact, you already did this in the book Exercise 3.2.i!
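The n = 1 step of the proof is easy to see numerically. In the sketch below (illustrative choices of R = (a, b) and of an f with ∫ f = 0), the primitive g(x) = ∫_{−∞}^x f(t) dt is again compactly supported in R and satisfies g′ = f up to discretization error.

    import numpy as np

    a, b = -1.0, 2.0
    x = np.linspace(a, b, 3001)
    dx = x[1] - x[0]

    # A smooth compactly supported f on (a, b) with integral zero (an ad hoc choice):
    bump = np.where((x > a) & (x < b),
                    np.exp(-1.0 / np.maximum((x - a) * (b - x), 1e-12)), 0.0)
    f = bump * (x - 0.5)
    f -= bump * (np.sum(f) * dx) / (np.sum(bump) * dx)   # enforce integral(f) = 0

    g = np.cumsum(f) * dx                                # g(x) = integral of f from a to x
    print(abs(g[0]), abs(g[-1]))                         # both ~ 0: g vanishes near both ends
    print(np.max(np.abs(np.gradient(g, dx) - f)))        # small: g' = f up to discretization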

12.3 Poincaré Lemma for connected sets

Lemma 12.3. Suppose that R and Q are two open rectangles such that R ∩ Q ≠ ∅, and ω ∈ Ω^n_c(R) and α ∈ Ω^n_c(Q) are such that ∫ω = ∫α. Then there exists µ ∈ Ω^{n−1}_c(R ∪ Q) such that ω − α = dµ.

Proof. Choose ω_0 ∈ Ω^n_c(R ∩ Q) such that ∫ω_0 = ∫ω = ∫α. Then by Theorem 12.2 there exist µ_1 ∈ Ω^{n−1}_c(R) and µ_2 ∈ Ω^{n−1}_c(Q) so that ω − ω_0 = dµ_1 and α − ω_0 = dµ_2. Subtracting these equations shows

ω − α = d(µ_1 − µ_2) = dµ,

where µ = µ_1 − µ_2 ∈ Ω^{n−1}_c(R ∪ Q).

Lemma 12.4. Suppose that U is a connected open subset of R^n and let R and Q be open rectangles such that R ∪ Q ⊂ U. If ω ∈ Ω^n_c(R) and α ∈ Ω^n_c(Q) are such that ∫ω = ∫α, then there exists µ ∈ Ω^{n−1}_c(U) such that ω − α = dµ.

Proof. Choose a chain of rectangles {R_j}_{j=0}^m in U (see Figure 3.3.1 on page 87 of the book) such that R_0 = R, R_m = Q, and R_j ∩ R_{j−1} ≠ ∅ for 1 ≤ j ≤ m. Then choose ω_j ∈ Ω^n_c(R_j) for 1 ≤ j < m such that ∫ω_j = ∫ω for all j, and further let ω_0 = ω and ω_m = α. Then by Lemma 12.3 there exist

µ_j ∈ Ω^{n−1}_c(R_{j−1} ∪ R_j) ⊂ Ω^{n−1}_c(U) for 1 ≤ j ≤ m,

such that ω_j − ω_{j−1} = dµ_j. Summing this last equation on j shows

α − ω = ∑_{j=1}^m (ω_j − ω_{j−1}) = ∑_{j=1}^m dµ_j = dµ,

where µ = ∑_{j=1}^m µ_j ∈ Ω^{n−1}_c(U).


Theorem 12.5. If U ⊂_o R^n is connected and ω ∈ Ω^n_c(U) with ∫_U ω = 0, then ω = dµ for some µ ∈ Ω^{n−1}_c(U).

Proof. Fix an open rectangle Q with Q ⊂ U and ω_0 ∈ Ω^n_c(Q) such that ∫ω_0 = 1. We are going to show that for each ω ∈ Ω^n_c(U) there exists µ ∈ Ω^{n−1}_c(U) so that

ω = (∫ω) · ω_0 + dµ,   (12.2)

which certainly suffices to prove the theorem.

Let R := {R_j}_{j=0}^N be a covering of K = supp(ω) ⊂ U by open rectangles such that R̄_j ⊂ U for all j, and then let {h_j}_{j=0}^N be a partition of unity subordinate to R such that ∑_{j=0}^N h_j = 1 on K. Also let ω_j := h_j ω and c_j = ∫ω_j for each j. Then by multiple uses of Lemma 12.4, there exists µ_j ∈ Ω^{n−1}_c(U) such that ω_j − c_j ω_0 = dµ_j for 0 ≤ j ≤ N. Summing this identity on j then shows

ω = ∑_{j=0}^N ω_j = ∑_{j=0}^N (c_j ω_0 + dµ_j) = (∑_{j=0}^N c_j) ω_0 + dµ,

where

µ := ∑_{j=0}^N µ_j ∈ Ω^{n−1}_c(U) and ∑_{j=0}^N c_j = ∑_{j=0}^N ∫ω_j = ∫ ∑_{j=0}^N ω_j = ∫ω.

Altogether, the last displayed equations prove Eq. (12.2).


Part IV

Calculus on Manifolds


13

Manifold Primer

Conventions:

1. If A, B are linear operators on some vector space, then [A, B] := AB − BA is the commutator of A and B.
2. If X is a topological space we will write A ⊂_o X, A ⊏ X and A ⊏⊏ X to mean A is an open, closed, and respectively a compact subset of X.
3. Given two sets A and B, the notation f : A → B will mean that f is a function from a subset D(f) ⊂ A to B. (We allow D(f) to be the empty set.) The set D(f) ⊂ A is called the domain of f and the subset R(f) := f(D(f)) ⊂ B is called the range of f. If f is injective, let f^{−1} : B → A denote the inverse function with domain D(f^{−1}) = R(f) and range R(f^{−1}) = D(f). If f : A → B and g : B → C, then g ∘ f denotes the composite function from A to C with domain D(g ∘ f) := f^{−1}(D(g)) and range R(g ∘ f) := g ∘ f(D(g ∘ f)) = g(R(f) ∩ D(g)).

Notation 13.1 Throughout these notes, let E and V denote finite dimensional vector spaces. A function F : E → V is said to be smooth if D(F) is open in E (D(F) = ∅ is allowed) and F : D(F) → V is infinitely differentiable. Given a smooth function F : E → V, let F′(x) denote the differential of F at x ∈ D(F). Explicitly, F′(x) = DF(x) denotes the linear map from E to V determined by

DF(x) a = F′(x) a := d/dt|_0 F(x + ta) for all a ∈ E.   (13.1)

We also let

F′′(x)(v, w) := (∂_v ∂_w F)(x) = d/dt|_0 d/ds|_0 F(x + tv + sw).   (13.2)

13.1 Imbedded Submanifolds

Rather than describe the most abstract setting for Riemannian geometry, for simplicity we choose to restrict our attention to imbedded submanifolds of a Euclidean space E = R^N.¹ We will equip R^N with the standard inner product,

⟨a, b⟩ = ⟨a, b⟩_{R^N} := ∑_{i=1}^N a_i b_i.

In general, we will denote inner products in these notes by ⟨·, ·⟩.

¹ Because of the Whitney imbedding theorem (see for example Theorem 6-3 in Auslander and MacKenzie [?]), this is actually not a restriction.

Definition 13.2. A subset M of E (see Figure 13.1) is a d-dimensional imbedded submanifold (without boundary) of E iff for all m ∈ M there is a function z : E → R^N such that:

1. D(z) is an open subset of E containing m,
2. R(z) is an open subset of R^N,
3. z : D(z) → R(z) is a diffeomorphism (a smooth invertible map with smooth inverse), and
4. z(M ∩ D(z)) = R(z) ∩ (R^d × {0}) ⊂ R^N.

(We write M^d if we wish to emphasize that M is a d-dimensional manifold.)

Fig. 13.1. An imbedded one dimensional submanifold in R^2.

Notation 13.3 Given an imbedded submanifold and diffeomorphism z as in the above definition, we will write z = (z_<, z_>), where z_< denotes the first d components of z and z_> the last N − d components of z. Let x : M → R^d denote the function defined by D(x) := M ∩ D(z) and x := z_<|_{D(x)}.

Notice that R(x) := x(D(x)) is an open subset of R^d and that x^{−1} : R(x) → D(x), thought of as a function taking values in E, is smooth.

Definition 13.4 (Charts). The bijection x : D(x) → R(x) is called a chart on M. Let A = A(M) denote the collection of charts on M. The collection of charts A = A(M) is often referred to as an atlas for M.

Remark 13.5. The imbedded submanifold M is made into a metric space using the Euclidean distance metric on E. With this metric topology, each chart x ∈ A(M) is a homeomorphism from D(x) ⊂_o M to R(x) ⊂_o R^d.

Theorem 13.6 (A Basic Construction of Manifolds). Let F : E → R^{N−d} be a smooth function and M := F^{−1}({0}) ⊂ E, which we assume to be non-empty. Suppose that F′(m) : E → R^{N−d} is surjective for all m ∈ M. Then M is a d-dimensional imbedded submanifold of E.

Proof. Let m ∈ M. We begin by constructing a smooth function G : E → R^d such that (G, F)′(m) : E → R^N = R^d × R^{N−d} is invertible. To do this, let X = Nul(F′(m)) and let Y be a complementary subspace, so that E = X ⊕ Y, and let P : E → X be the associated projection map, see Figure 13.2. Notice that F′(m) : Y → R^{N−d} is a linear isomorphism of vector spaces and hence

dim(X) = dim(E) − dim(Y) = N − (N − d) = d.

In particular, X and R^d are isomorphic as vector spaces. Set G(ξ) := APξ for ξ ∈ E, where A : X → R^d is an arbitrary but fixed linear isomorphism of vector spaces. Then for x ∈ X and y ∈ Y,

(G, F)′(m)(x + y) = (G′(m)(x + y), F′(m)(x + y)) = (AP(x + y), F′(m)y) = (Ax, F′(m)y) ∈ R^d × R^{N−d},

from which it follows that (G, F)′(m) is an isomorphism.

By the inverse function theorem, there exists a neighborhood U ⊂_o E of m such that V := (G, F)(U) ⊂_o R^N and (G, F) : U → V is a diffeomorphism. Let z = (G, F) with D(z) = U and R(z) = V. Then z is a chart of E about m satisfying the conditions of Definition 13.2. Indeed, items 1) – 3) are clear by construction. If p ∈ M ∩ D(z), then z(p) = (G(p), F(p)) = (G(p), 0) ∈ R(z) ∩ (R^d × {0}). Conversely, if p ∈ D(z) is a point such that z(p) = (G(p), F(p)) ∈ R(z) ∩ (R^d × {0}), then F(p) = 0 and hence p ∈ M ∩ D(z); so item 4) of Definition 13.2 is verified.

Example 13.7. Let gl(n, R) denote the set of all n × n real matrices. The following are examples of imbedded submanifolds.

Fig. 13.2. Constructing charts for M using the inverse function theorem. For simplicity of the drawing, m ∈ M is assumed to be the origin of E = X ⊕ Y.

1. Any open subset M of E.
2. The graph,

Γ(f) := {(x, f(x)) ∈ R^d × R^{N−d} : x ∈ D(f)} ⊂ D(f) × R^{N−d} ⊂ R^N,

of any smooth function f : R^d → R^{N−d}, as can be seen by applying Theorem 13.6 with F(x, y) := y − f(x). In this case it would be a good idea for the reader to produce an explicit chart z as in Definition 13.2 such that D(z) = R(z) = D(f) × R^{N−d}.
3. The unit sphere, S^{N−1} := {x ∈ R^N : ⟨x, x⟩_{R^N} = 1}, as is seen by applying Theorem 13.6 with E = R^N and F(x) := ⟨x, x⟩_{R^N} − 1. Alternatively, express S^{N−1} locally as the graph of smooth functions and then use item 2.
4. GL(n, R) := {g ∈ gl(n, R) : det(g) ≠ 0}, see item 1.
5. SL(n, R) := {g ∈ gl(n, R) : det(g) = 1}, as is seen by taking E = gl(n, R) and F(g) := det(g) − 1 and then applying Theorem 13.6 with the aid of Lemma 13.8 below.
6. O(n) := {g ∈ gl(n, R) : g^tr g = I}, where g^tr denotes the transpose of g. In this case take F(g) := g^tr g − I, thought of as a function from E = gl(n, R) to S(n), where

S(n) := {A ∈ gl(n, R) : A^tr = A}

is the subspace of symmetric matrices. To show F′(g) is surjective, show

F′(g)(gB) = B + B^tr for all g ∈ O(n) and B ∈ gl(n, R).

7. SO(n) := {g ∈ O(n) : det(g) = 1}, an open subset of O(n).
8. M × N ⊂ E × V, where M and N are imbedded submanifolds of E and V respectively. The reader should verify this by constructing appropriate charts for E × V by taking "tensor" products of the charts for E and V associated to M and N respectively.


9. The n-dimensional torus,

T^n := {z ∈ C^n : |z_i| = 1 for i = 1, 2, . . . , n} = (S^1)^n,

where z = (z_1, . . . , z_n) and |z_i| = √(z_i z̄_i). This follows by induction using items 3 and 8. Alternatively, apply Theorem 13.6 with F(z) := (|z_1|^2 − 1, . . . , |z_n|^2 − 1).

Lemma 13.8. Suppose g ∈ GL(n, R) and A ∈ gl(n, R); then

det′(g) A = det(g) tr(g^{−1} A).   (13.3)

Proof. By definition we have

det′(g) A = d/dt|_0 det(g + tA) = det(g) d/dt|_0 det(I + t g^{−1} A).

So it suffices to prove d/dt|_0 det(I + tB) = tr(B) for all matrices B. If B is upper triangular, then det(I + tB) = ∏_{i=1}^n (1 + t B_{ii}) and hence by the product rule,

d/dt|_0 det(I + tB) = ∑_{i=1}^n B_{ii} = tr(B).

This completes the proof because: 1) every matrix can be put into upper triangular form by a similarity transformation, and 2) "det" and "tr" are invariant under similarity transformations.
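Formula (13.3) is easy to spot-check numerically; the sketch below (illustrative only) compares a finite-difference directional derivative of det at a random invertible g against det(g) tr(g^{−1}A).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    g = rng.normal(size=(n, n)) + n * np.eye(n)      # comfortably invertible
    A = rng.normal(size=(n, n))

    h = 1e-6
    fd = (np.linalg.det(g + h * A) - np.linalg.det(g - h * A)) / (2 * h)
    formula = np.linalg.det(g) * np.trace(np.linalg.solve(g, A))   # det(g) tr(g^{-1} A)
    print(fd, formula)   # the two numbers agree to many significant digits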

Definition 13.9. Let E and V be two finite dimensional vector spaces and M^d ⊂ E and N^k ⊂ V be two imbedded submanifolds. A function f : M → N is said to be smooth if for all charts x ∈ A(M) and y ∈ A(N), the function y ∘ f ∘ x^{−1} : R^d → R^k is smooth.

Exercise 13.1. Let M^d ⊂ E and N^k ⊂ V be two imbedded submanifolds as in Definition 13.9.

1. Show that a function f : R^k → M is smooth iff f is smooth when thought of as a function from R^k to E.
2. If F : E → V is a smooth function such that F(M ∩ D(F)) ⊂ N, show that f := F|_M : M → N is smooth.
3. Show the composition of smooth maps between imbedded submanifolds is smooth.

Proposition 13.10. Assuming the notation in Definition 13.9, a function f : M → N is smooth iff there is a smooth function F : E → V such that f = F|_M.

Proof. (Sketch.) Suppose that f : M → N is smooth, m ∈ M and n = f(m). Let z be as in Definition 13.2 and let w be a chart on N such that n ∈ D(w). By shrinking the domain of z if necessary, we may assume that R(z) = U × W where U ⊂_o R^d and W ⊂_o R^{N−d}, in which case z(M ∩ D(z)) = U × {0}. For ξ ∈ D(z), let F(ξ) := f(z^{−1}(z_<(ξ), 0)), with z = (z_<, z_>) as in Notation 13.3. Then F : D(z) → N is a function such that F|_{M∩D(z)} = f|_{M∩D(z)}. The function F is smooth. Indeed, letting x = z_<|_{D(z)∩M},

w_< ∘ F = w_< ∘ f(z^{−1}(z_<(·), 0)) = w_< ∘ f ∘ x^{−1}(z_<(·), 0),

which, being the composition of the smooth maps w_< ∘ f ∘ x^{−1} (smooth by assumption) and ξ → (z_<(ξ), 0), is smooth as well. Hence by definition, F is smooth as claimed. Using a standard partition of unity argument (which we omit), it is possible to piece this local construction together to build a globally defined smooth function F : E → V such that f = F|_M.

Definition 13.11. A function f : M → N is a diffeomorphism if f is smooth and has a smooth inverse. The set of diffeomorphisms f : M → M is a group under composition, which will be denoted by Diff(M).

13.2 Tangent Planes and Spaces

Definition 13.12. Given an imbedded submanifold M ⊂ E and m ∈ M, let τ_m M ⊂ E denote the collection of all vectors v ∈ E such that there exists a smooth path σ : (−ε, ε) → M with σ(0) = m and v = d/ds|_0 σ(s). The subset τ_m M is called the tangent plane to M at m, and v ∈ τ_m M is called a tangent vector, see Figure 13.3.

Fig. 13.3. Tangent plane, τ_m M, to M at m and a vector, v, in τ_m M.


Theorem 13.13. For each m ∈ M, τ_m M is a d-dimensional subspace of E. If z : E → R^N is as in Definition 13.2, then τ_m M = Nul(z_>′(m)). If x is a chart on M such that m ∈ D(x), then

{d/ds|_0 x^{−1}(x(m) + s e_i)}_{i=1}^d

is a basis for τ_m M, where {e_i}_{i=1}^d is the standard basis for R^d.

Proof. Let σ : (−ε, ε) → M be a smooth path with σ(0) = m and v = d/ds|_0 σ(s), and let z be a chart (for E) around m as in Definition 13.2 such that x = z_<. Then z_>(σ(s)) = 0 for all s and therefore

0 = d/ds|_0 z_>(σ(s)) = z_>′(m) v,

which shows that v ∈ Nul(z_>′(m)), i.e. τ_m M ⊂ Nul(z_>′(m)). In preparation for the proof of the converse inclusion, let us also notice that

d/ds|_0 z(σ(s)) = d/ds|_0 (z_<(σ(s)), z_>(σ(s))) = (z_<′(m) σ′(0), 0) = (z_<′(m) v, 0).

Now suppose that v ∈ Nul(z_>′(m)). We wish to find σ(s) ∈ M with σ(0) = m and σ′(0) = v. According to the previous equation we must have d/ds|_0 z(σ(s)) = (z_<′(m) v, 0) = z′(m) v, which suggests we define

σ(s) = z^{−1}(z(m) + s z′(m) v) = z^{−1}(z(m) + s(z_<′(m) v, 0)) = x^{−1}(x(m) + s z_<′(m) v).

For this choice of σ(s) we have z(σ(s)) = z(m) + s z′(m) v and therefore z′(m) σ′(0) = z′(m) v, from which it follows that σ′(0) = v as desired. We have now shown Nul(z_>′(m)) ⊂ τ_m M, which completes the proof that τ_m M = Nul(z_>′(m)).

Since z_<′(m) : τ_m M = Nul(z_>′(m)) → R^d is a linear isomorphism², the above argument also shows

d/ds|_0 x^{−1}(x(m) + sw) = (z_<′(m)|_{τ_m M})^{−1} w ∈ τ_m M for all w ∈ R^d.

In particular it follows that

{d/ds|_0 x^{−1}(x(m) + s e_i)}_{i=1}^d = {(z_<′(m)|_{τ_m M})^{−1} e_i}_{i=1}^d

is a basis for τ_m M, see Figure 13.4 below.

The following proposition is an easy consequence of Theorem 13.13 and the proof of Theorem 13.6.

² The reader should convince herself of this fact if it is not evident.

Proposition 13.14. Suppose that M is an imbedded submanifold constructed as in Theorem 13.6. Then τ_m M = Nul(F′(m)).

Exercise 13.2. Show:

1. τ_m M = E, if M is an open subset of E.
2. τ_g GL(n, R) = gl(n, R), for all g ∈ GL(n, R).
3. τ_m S^{N−1} = {m}^⊥ for all m ∈ S^{N−1}.
4. Let sl(n, R) be the traceless matrices,

sl(n, R) := {A ∈ gl(n, R) : tr(A) = 0}.   (13.4)

Then

τ_g SL(n, R) = {A ∈ gl(n, R) : g^{−1}A ∈ sl(n, R)},

and in particular τ_I SL(n, R) = sl(n, R).
5. Let so(n, R) be the skew symmetric matrices,

so(n, R) := {A ∈ gl(n, R) : A = −A^tr}.

Then

τ_g O(n) = {A ∈ gl(n, R) : g^{−1}A ∈ so(n, R)},

and in particular τ_I O(n) = so(n, R). Hint: g^{−1} = g^tr for all g ∈ O(n).
6. If M ⊂ E and N ⊂ V are imbedded submanifolds, then

τ_{(m,n)}(M × N) = τ_m M × τ_n N ⊂ E × V.
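Item 3 can be seen concretely via Proposition 13.14 with F(x) = ⟨x, x⟩ − 1: then F′(m)v = 2⟨m, v⟩, so Nul(F′(m)) = {m}^⊥. The sketch below (illustrative only; the point m and the basis are random choices) verifies numerically that a basis of {m}^⊥ is killed by a finite-difference approximation of F′(m).

    import numpy as np

    def F(x):
        return np.dot(x, x) - 1.0                # S^{N-1} = F^{-1}(0)

    rng = np.random.default_rng(1)
    m = rng.normal(size=4); m /= np.linalg.norm(m)   # a point of S^3

    # An orthonormal basis of {m}^perp, obtained by projecting and orthonormalizing.
    B = rng.normal(size=(4, 3))
    B -= np.outer(m, m @ B)                      # project the columns onto m^perp
    Q, _ = np.linalg.qr(B)

    h = 1e-6
    for v in Q.T:
        dF = (F(m + h * v) - F(m - h * v)) / (2 * h)   # F'(m) v, numerically
        print(abs(dF) < 1e-6)                          # True: v lies in Nul(F'(m)) = tau_m S^3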

It is quite possible that τ_m M = τ_{m′} M for some m ≠ m′ with m and m′ in M (think of the sphere). Because of this, it is helpful to label each of the tangent planes with its base point.

Definition 13.15. The tangent space (T_m M) to M at m is given by

T_m M := {m} × τ_m M ⊂ M × E.

Let

TM := ∪_{m∈M} T_m M,

and call TM the tangent space (or tangent bundle) of M. A tangent vector is a point v_m := (m, v) ∈ TM, and we let π : TM → M denote the canonical projection defined by π(v_m) = m. Each tangent space is made into a vector space with the vector space operations defined by c(v_m) := (cv)_m and v_m + w_m := (v + w)_m.

Exercise 13.3. Prove that TM is an imbedded submanifold of E × E. Hint: suppose that z : E → R^N is a function as in Definition 13.2. Define D(Z) := D(z) × E and Z : D(Z) → R^N × R^N by Z(x, a) := (z(x), z′(x)a). Use Z's of this type to check that TM satisfies Definition 13.2.


Notation 13.16 In the sequel, given a smooth path σ : (−ε, ε) → M, we will abuse notation and write σ′(0) for either

d/ds|_0 σ(s) ∈ τ_{σ(0)} M

or for

(σ(0), d/ds|_0 σ(s)) ∈ T_{σ(0)} M = {σ(0)} × τ_{σ(0)} M.

Also, given a chart x = (x_1, x_2, . . . , x_d) on M and m ∈ D(x), let ∂/∂x_i|_m denote the element of T_m M determined by ∂/∂x_i|_m = σ′(0), where σ(s) := x^{−1}(x(m) + s e_i), i.e.

∂/∂x_i|_m = (m, d/ds|_0 x^{−1}(x(m) + s e_i)),   (13.5)

see Figure 13.4.

Fig. 13.4. Forming a basis of tangent vectors.

The reason for the strange notation in Eq. (13.5) will be explained after Notation 13.18. By definition, every element of T_m M is of the form σ′(0), where σ is a smooth path into M such that σ(0) = m. Moreover, by Theorem 13.13, {∂/∂x_i|_m}_{i=1}^d is a basis for T_m M.

Definition 13.17. Suppose that f : M → V is a smooth function, m ∈ D(f) and v_m ∈ T_m M. Write

v_m f = df(v_m) := d/ds|_0 f(σ(s)),

where σ is any smooth path in M such that σ′(0) = v_m. The function df : TM → V will be called the differential of f.

Notation 13.18 If M and N are two manifolds and f : M × N → V is a smooth function, we will write d_M f(·, n) to indicate that we are computing the differential of the function m ∈ M → f(m, n) ∈ V for fixed n ∈ N.

To understand the notation in (13.5), suppose that f = F ∘ x = F(x_1, x_2, . . . , x_d), where F : R^d → R is a smooth function and x is a chart on M. Then

∂f(m)/∂x_i := (∂/∂x_i|_m) f = (D_i F)(x(m)),

where D_i denotes the i-th partial derivative of F. Also notice that dx_j(∂/∂x_i|_m) = δ_{ij}, so that {dx_i|_{T_m M}}_{i=1}^d is the dual basis of {∂/∂x_i|_m}_{i=1}^d, and therefore if v_m ∈ T_m M then

v_m = ∑_{i=1}^d dx_i(v_m) ∂/∂x_i|_m.   (13.6)

This explicitly exhibits vm as a first order differential operator acting on“germs” of smooth functions defined near m ∈M.

Remark 13.19 (Product Rule). Suppose that f : M → V and g : M → End(V) are smooth functions; then

v_m(gf) = d/ds|_0 [g(σ(s)) f(σ(s))] = v_m g · f(m) + g(m) v_m f,

or equivalently

d(gf)(v_m) = dg(v_m) f(m) + g(m) df(v_m).

This last equation will be abbreviated as d(gf) = dg · f + g df.

Definition 13.20. Let f : M → N be a smooth map of imbedded submanifolds. Define the differential, f_*, of f by

f_* v_m = (f ∘ σ)′(0) ∈ T_{f(m)} N,

where v_m = σ′(0) ∈ T_m M and m ∈ D(f).

Lemma 13.21. The differentials defined in Definitions 13.17 and 13.20 are well defined linear maps on T_m M for each m ∈ D(f).

Proof. We will only prove that f_* is well defined, since the case of df is similar. By Proposition 13.10, there is a smooth function F : E → V such that f = F|_M. Therefore, by the chain rule,

f_* v_m = (f ∘ σ)′(0) := (d/ds|_0 f(σ(s)))_{f(σ(0))} = [F′(m) v]_{f(m)},   (13.7)


Fig. 13.5. The differential of f.

where σ is a smooth path in M such that σ′(0) = v_m. It follows from (13.7) that f_* v_m does not depend on the choice of the path σ. It is also clear from (13.7) that f_* is linear on T_m M.

Remark 13.22. Suppose that F : E → V is a smooth function and that f := F|_M. Then, as in the proof of Lemma 13.21,

df(v_m) = F′(m) v   (13.8)

for all v_m ∈ T_m M and m ∈ D(f). Incidentally, since the left hand sides of (13.7) and (13.8) are defined "intrinsically," the right members of (13.7) and (13.8) are independent of the possible choices of functions F which extend f.

Lemma 13.23 (Chain Rules). Suppose that M, N, and P are imbedded submanifolds and V is a finite dimensional vector space. Let f : M → N, g : N → P, and h : N → V be smooth functions. Then:

(g ∘ f)_* v_m = g_*(f_* v_m) for all v_m ∈ TM,   (13.9)

and

d(h ∘ f)(v_m) = dh(f_* v_m) for all v_m ∈ TM.   (13.10)

These equations will be written more concisely as (g ∘ f)_* = g_* f_* and d(h ∘ f) = dh f_*, respectively.

Proof. Let σ be a smooth path in M such that v_m = σ′(0). Then, see Figure 13.6,

(g ∘ f)_* v_m := (g ∘ f ∘ σ)′(0) = g_*(f ∘ σ)′(0) = g_* f_* σ′(0) = g_* f_* v_m.

Similarly,

d(h ∘ f)(v_m) := d/ds|_0 (h ∘ f ∘ σ)(s) = dh((f ∘ σ)′(0)) = dh(f_* σ′(0)) = dh(f_* v_m).

Fig. 13.6. The chain rule.

If f : M → V is a smooth function, x is a chart on M, and m ∈ D(f) ∩ D(x), we will write ∂f(m)/∂x_i for df(∂/∂x_i|_m). Combining this notation with Eq. (13.6) leads to the pleasing formula

df = ∑_{i=1}^d (∂f/∂x_i) dx_i,   (13.11)

by which we mean

df(v_m) = ∑_{i=1}^d (∂f(m)/∂x_i) dx_i(v_m).

Suppose that f : M^d → N^k is a smooth map of imbedded submanifolds, m ∈ M, x is a chart on M such that m ∈ D(x), and y is a chart on N such that f(m) ∈ D(y). Then the matrix of

f_{*m} := f_*|_{T_m M} : T_m M → T_{f(m)} N


relative to the bases {∂/∂x_i|_m}_{i=1}^d of T_m M and {∂/∂y_j|_{f(m)}}_{j=1}^k of T_{f(m)} N is (∂(y_j ∘ f)(m)/∂x_i). Indeed, if v_m = ∑_{i=1}^d v^i ∂/∂x_i|_m, then

f_* v_m = ∑_{j=1}^k dy_j(f_* v_m) ∂/∂y_j|_{f(m)}
        = ∑_{j=1}^k d(y_j ∘ f)(v_m) ∂/∂y_j|_{f(m)}   (by Eq. (13.10))
        = ∑_{j=1}^k ∑_{i=1}^d (∂(y_j ∘ f)(m)/∂x_i) · dx_i(v_m) ∂/∂y_j|_{f(m)}   (by Eq. (13.11))
        = ∑_{j=1}^k ∑_{i=1}^d (∂(y_j ∘ f)(m)/∂x_i) v^i ∂/∂y_j|_{f(m)}.

Example 13.24. Let M = O(n), k ∈ O(n), and f : O(n) → O(n) be defined by f(g) := kg. Then f is a smooth function on O(n) because it is the restriction of a smooth function on gl(n, R). Given A_g ∈ T_g O(n), by Eq. (13.7),

f_* A_g = (kg, kA) = (kA)_{kg}.

(In the future we denote f by L_k; L_k is left translation by k ∈ O(n).)

Definition 13.25. A Lie group is a manifold, G, which is also a group such that the group operations are smooth functions. The tangent space, g := Lie(G) := T_e G, to G at the identity e ∈ G is called the Lie algebra of G.

Exercise 13.4. Verify that GL(n, R), SL(n, R), O(n), SO(n) and T^n (see Example 13.7) are all Lie groups and that

Lie(GL(n, R)) ≅ gl(n, R),
Lie(SL(n, R)) ≅ sl(n, R),
Lie(O(n)) = Lie(SO(n)) ≅ so(n, R), and
Lie(T^n) ≅ (iR)^n ⊂ C^n.

See Exercise 13.2 for the notation being used here.

Exercise 13.5 (Continuation of Exercise 13.3). Show for each chart x on M that the function

ϕ(v_m) := (x(m), dx(v_m)) = x_* v_m

is a chart on TM. Note that D(ϕ) := ∪_{m∈D(x)} T_m M.


14

Integration over Manifolds

Since differential forms are related to signed volumes, in order to perform integration of a k-form over a surface it will be necessary to fix an overall sign of the answer. This requires that we equip any manifold we wish to integrate a form over with an "orientation." The results of the next section are devoted to this concept.

Warning of notation change: the d = dim(M) of the last chapter becomes n in this chapter. Thus in this chapter, M will be an n-dimensional submanifold of E. We now extend the notion of k-forms in Definition 8.23 to the manifold setting.

Definition 14.1 (Differential k-form). A 0-form on M is just a function f : M → R, while (for k ∈ N) a differential k-form ω on M is an assignment

M ∋ p → ω_p ∈ Λ^k([T_p M]*) for all p ∈ M.

If ω is a k-form on M and x is a chart on M, then γ = x^{−1} : R(x) → D(x) is a parametrization and γ*ω is a k-form on R(x). We refer to such a map γ as a local parametrization of M.

Definition 14.2. We say ω is smooth if γ*ω ∈ Ω^k(R(x)) is a smooth k-form on the open set R(x) ⊂ R^n for all x ∈ A(M). Let Ω^k(M) denote the smooth k-forms on M.

Let t = (t_1, . . . , t_n) be the standard coordinates on R^n. Continuing the notation above, if ω ∈ Ω^k(M), there exist ω^x_J ∈ C^∞(R(x)) such that

γ*ω = ∑_{J : |J|=k} ω^x_J(t_1, . . . , t_n) dt_J,

and therefore

ω = x*γ*ω = ∑_{J : |J|=k} ω^x_J(x_1, . . . , x_n) dx_J on D(x).

In particular, if ω is an n-form on M, we have

ω = ω^x_{[n]}(x_1, . . . , x_n) dx_{[n]} = ω^x_{[n]}(x_1, . . . , x_n) dx_1 ∧ · · · ∧ dx_n.

Following along the lines of Definition 9.3, we make the following definition.

Definition 14.3. If ω ∈ Ω^n(M), x is a chart on M, and γ = x^{−1}, then

∫_γ ω = ∫_{R(x)} γ*ω = ∫_{R(x)} ω^x_{[n]}(t_1, . . . , t_n) dt_1 . . . dt_n.

In other words, we are making the "substitution" x = γ(t).

14.1 Orientation and Integrating Forms

Definition 14.4. Let M be an n-dimensional submanifold of a Euclidean space E = R^N. We say M is orientable if there exists o ∈ Ω^n(M) such that o_p ≠ 0 for all p ∈ M. We refer to any such o as an orientation on M and refer to (M, o) as an oriented manifold.

For any chart x ∈ A(M) we know that

o = o^x dx_{[n]} = o^x · dx_1 ∧ · · · ∧ dx_n

with o^x ∈ C^∞(D(x), R \ {0}).

Definition 14.5. Given a chart x ∈ A(M), let sgn_o(x) := sgn(o^x) ∈ C^∞(D(x), {±1}) and note that sgn_o(x) is constant on connected components of D(x). Similarly, if γ : D → γ(D) ⊂_o M is a parametrization, let

sgn_o(γ) := sgn((γ*o)_{[n]}) ∈ C^∞(D, {±1}).

Example 14.6. Suppose that M = Γ(g) where g : R^2 → R, and let o = dx ∧ dy. Then γ(t_1, t_2) = (t_1, t_2, g(t_1, t_2)) satisfies γ*o = dt_1 ∧ dt_2 and so sgn_o(γ) = 1.

Definition 14.7. Suppose that (M, o) is an oriented manifold and let γ : D → γ(D) be a parametrization of γ(D) ⊂_o M. For ω ∈ Ω^n_c(γ(D)), we let

∫_{(γ,o)} ω := ∫_D sgn_o(γ) γ*ω.

Lemma 14.8. If γ : D → V is a local parametrization of V ⊂_o M and ψ : D̃ → D is a diffeomorphism, then

sgn_o(γ ∘ ψ) = sgn_o(γ) ∘ ψ · sgn(det ψ′).


Proof. Let u ∈ C^∞(D, R^×) be such that

γ*o = u(t) dt_1 ∧ · · · ∧ dt_n.

Then

(γ ∘ ψ)*o = ψ*(u(t) dt_1 ∧ · · · ∧ dt_n) = u(ψ(t)) det ψ′(t) dt_1 ∧ · · · ∧ dt_n,

and hence

sgn_o(γ ∘ ψ) = sgn_o(γ) ∘ ψ · sgn(det ψ′).

Theorem 14.9. Suppose that (M, o) is an oriented manifold and let γ : D → γ(D) ⊂ M and γ̃ : D̃ → γ̃(D̃) ⊂ M be two parametrizations of γ(D) and γ̃(D̃) respectively. If ω ∈ Ω^n_c(γ(D) ∩ γ̃(D̃)), then

∫_{(γ,o)} ω = ∫_{(γ̃,o)} ω.

Proof. By replacing D by γ^{−1}(γ(D) ∩ γ̃(D̃)) and D̃ by γ̃^{−1}(γ(D) ∩ γ̃(D̃)), we may assume that γ(D) = γ̃(D̃). Then ψ := γ^{−1} ∘ γ̃ : D̃ → D is a diffeomorphism such that γ̃ = γ ∘ ψ. Hence, using Lemma 14.8 and the change of variables Theorem 11.21,

∫_{(γ̃,o)} ω = ∫_{D̃} sgn_o(γ̃) γ̃*ω = ∫_{D̃} sgn_o(γ ∘ ψ) (γ ∘ ψ)*ω
            = ∫_{D̃} sgn(det ψ′) · (sgn_o(γ) ∘ ψ) · ψ*(γ*ω)
            = ∫_{D̃} sgn(det ψ′) · ψ*[sgn_o(γ) γ*ω]
            = ∫_D sgn_o(γ) γ*ω = ∫_{(γ,o)} ω.

Corollary 14.10. Let (M, o) be an oriented manifold and ω ∈ Ω^n_c(M). Further let {γ_j : D_j → γ_j(D_j)}_{j=1}^N and {γ̃_k : D̃_k → γ̃_k(D̃_k)}_{k=1}^{Ñ} be two collections of local parametrizations such that supp(ω) is covered by {γ_j(D_j)}_{j=1}^N and by {γ̃_k(D̃_k)}_{k=1}^{Ñ}. Further let {h_j}_{j=1}^N and {h̃_k}_{k=1}^{Ñ} be partitions of unity of supp(ω) subordinate to each of these covers. Then

∑_{j=1}^N ∫_{(γ_j,o)} h_j ω = ∑_{k=1}^{Ñ} ∫_{(γ̃_k,o)} h̃_k ω.   (14.1)

Proof. Since h_j h̃_k ω ∈ Ω^n_c(γ_j(D_j) ∩ γ̃_k(D̃_k)), it follows from Theorem 14.9 that

∫_{(γ_j,o)} h_j h̃_k ω = ∫_{(γ̃_k,o)} h_j h̃_k ω.

Summing this equation on j gives

∑_{j=1}^N ∫_{(γ_j,o)} h_j h̃_k ω = ∑_{j=1}^N ∫_{(γ̃_k,o)} h_j h̃_k ω = ∫_{(γ̃_k,o)} ∑_{j=1}^N h_j h̃_k ω = ∫_{(γ̃_k,o)} h̃_k ω,

and then summing this equation on k shows

∑_{k=1}^{Ñ} ∫_{(γ̃_k,o)} h̃_k ω = ∑_{k=1}^{Ñ} ∑_{j=1}^N ∫_{(γ_j,o)} h_j h̃_k ω = ∑_{j=1}^N ∑_{k=1}^{Ñ} ∫_{(γ_j,o)} h̃_k h_j ω
= ∑_{j=1}^N ∫_{(γ_j,o)} ∑_{k=1}^{Ñ} h̃_k h_j ω = ∑_{j=1}^N ∫_{(γ_j,o)} h_j ω,

and the proof is complete.

Definition 14.11 (Integration of Forms). If (M, o) is an oriented manifold and ω ∈ Ω^n_c(M), we define

∫_{(M,o)} ω := ∑_{j=1}^N ∫_{(γ_j,o)} h_j ω,

wherein we are using the notation introduced in Corollary 14.10. (Corollary 14.10 guarantees that this definition is independent of the choices made.)

Remark 14.12. The integral so defined is linear. Indeed, if ω_1, ω_2 ∈ Ω^n_c(M) and c ∈ R, choose parametrizations {γ_j}_{j=1}^N so that {γ_j(D_j)}_{j=1}^N is an open cover of K = supp(ω_1) ∪ supp(ω_2) and let {h_j}_{j=1}^N be an associated partition of unity. Then

∫_{(γ_j,o)} h_j[ω_1 + cω_2] = ∫_{D_j} sgn_o(γ_j) · γ_j*(h_j[ω_1 + cω_2])
= ∫_{D_j} [sgn_o(γ_j) · γ_j*(h_j ω_1) + c sgn_o(γ_j) · γ_j*(h_j ω_2)]
= ∫_{D_j} sgn_o(γ_j) · γ_j*(h_j ω_1) + c ∫_{D_j} sgn_o(γ_j) · γ_j*(h_j ω_2)
= ∫_{(γ_j,o)} h_j ω_1 + c ∫_{(γ_j,o)} h_j ω_2.


Summing this equation on j then shows

∫_{(M,o)} [ω_1 + cω_2] = ∫_{(M,o)} ω_1 + c ∫_{(M,o)} ω_2.

14.2 Manifolds with Boundary and Boundary Orientations

Definition 14.13. Suppose that M ⊂ E is an n-dimensional manifold and Ω ⊂ M is a subset of M. We say that Ω is a submanifold of M with boundary if for each p ∈ Ω there is a chart x ∈ A(M) such that either (see Figure 14.1):

1. p ∈ D(x) ⊂ Ω, or
2. x(p) = 0 and

x : Ω ∩ D(x) → H ∩ R(x),

where H = {a ∈ R^n : a_1 ≤ 0}. If the second case holds, we say p is in the boundary of Ω. We further let ∂Ω be the set of points p ∈ M for which case 2 holds.

Fig. 14.1. A region, Ω ⊂ M, which is a manifold with boundary. We have also depicted an outward pointing normal at the point p ∈ ∂Ω.

Remark 14.14. Continuing the notation above, if p ∈ ∂Ω and x is a chart centered at p as above, then x̃(q) := (x_2(q), . . . , x_n(q)), defined on D(x̃) := ∂Ω ∩ D(x), may be used to make ∂Ω into a manifold as well. For lack of time we take this fact for granted here.

Definition 14.15. Suppose that Ω is a submanifold of M with boundary and o is an orientation on M. Then we define an orientation, o_{∂Ω}, on ∂Ω by requiring [o_{∂Ω}]_p = i_{N_p} o, where N_p ∈ T_p M is a smoothly varying outward pointing "normal" vector, see Figure 14.1. It is not hard to show o_{∂Ω} is a nowhere vanishing (n − 1)-form on ∂Ω.

14.3 Stokes' theorem

In this section we assume that (M^n, o) is an oriented manifold, Ω is a submanifold of M with boundary, and o_{∂Ω} is the induced boundary orientation. To be perhaps overly pedantic, we also let i_{∂Ω} : ∂Ω → M be the inclusion map, i_{∂Ω}(p) = p for all p ∈ ∂Ω.

Remark 14.16. In the proofs below we will use without further mention that sgn_o(γ) is locally constant (constant on connected sets) and hence d[sgn_o(γ)] = 0.

Theorem 14.17 (Local Stokes' Theorem 1). If x : D(x) → R(x) is a type 1 chart for Ω, γ = x^{−1}, and µ ∈ Ω^{n−1}_c(D(x)), then

∫_{(Ω,o)} dµ = 0 = ∫_{(∂Ω,o_{∂Ω})} i*_{∂Ω} µ.

Proof. By assumption, i*_{∂Ω} µ = 0 on ∂Ω and hence ∫_{(∂Ω,o_{∂Ω})} i*_{∂Ω} µ = 0. Moreover,

∫_{(Ω,o)} dµ = ∫_{R(x)} sgn_o(γ) γ*dµ = ∫_{R(x)} sgn_o(γ) d[γ*µ] = ∫_{R^n} d[sgn_o(γ) γ*µ] = 0,

wherein we have used Remark 14.16 in the second to last equality and Proposition 11.13 for the last one.

Lemma 14.18. Let x : D(x) → R(x) be a type 2 chart for Ω, γ = x^{-1}, and ι : R^{n-1} → ∂H be defined by

$$\iota(s_1,\dots,s_{n-1})=(0,s_1,\dots,s_{n-1}).$$

If we let D := ι^{-1}(R(x)), so that γ ∘ ι : D → ∂Ω is a local parametrization, then

$$\operatorname{sgn}_{o_{\partial\Omega}}(\gamma\circ\iota)=\operatorname{sgn}_o(\gamma)\ \text{ on }\ \partial\Omega\cap D(x).\tag{14.2}$$

Proof. Let p ∈ ∂Ω ∩ D(x), let N be an outward pointing normal at p, and note (see Figure 14.1) that

$$x_{*}N=c_1e_1+\sum_{j=2}^{n}c_je_j$$


for c_1 > 0 and c_j ∈ R, and therefore

$$N=\gamma_{*}x_{*}N=c_1\gamma_{*}e_1+\sum_{j=2}^{n}c_j\gamma_{*}e_j\quad\text{with }c_1>0.$$

Hence it follows that

$$\begin{aligned}
\bigl((\gamma\circ\iota)^{*}o_{\partial\Omega}\bigr)(e_1,\dots,e_{n-1})&=(\iota^{*}\gamma^{*}o_{\partial\Omega})(e_1,\dots,e_{n-1})=(\gamma^{*}o_{\partial\Omega})(e_2,\dots,e_n)\\
&=o_{\partial\Omega}(\gamma_{*}e_2,\dots,\gamma_{*}e_n)=o(N,\gamma_{*}e_2,\dots,\gamma_{*}e_n)\\
&=c_1\,o(\gamma_{*}e_1,\gamma_{*}e_2,\dots,\gamma_{*}e_n).
\end{aligned}$$

From this equation it follows that sgn_{o_∂Ω}(γ ∘ ι) = sgn_o(γ) at p ∈ ∂Ω ∩ D(x), which proves Eq. (14.2).

Theorem 14.19 (Local Stokes’ Theorem 2). If x : D(x) → R(x) is a type 2 chart for Ω, γ = x^{-1}, and μ ∈ Ω^{n-1}_c(D(x)), then

$$\int_{(\Omega,o)}d\mu=\int_{(\partial\Omega,o_{\partial\Omega})}i_{\partial\Omega}^{*}\mu.$$

Proof. By the definitions, book Exercise 3.2viii (also see Theorem 9.6), and Lemma 14.18 we find

$$\begin{aligned}
\int_{(\Omega,o)}d\mu&=\int_{R(x)\cap H}\operatorname{sgn}_o(\gamma)\cdot\gamma^{*}(d\mu)=\int_{R(x)\cap H}d\bigl(\operatorname{sgn}_o(\gamma)\,\gamma^{*}\mu\bigr)\\
&=\int_{\iota^{-1}(R(x)\cap H)}\operatorname{sgn}_o(\gamma)\,\iota^{*}\gamma^{*}\mu=\int_{D}\operatorname{sgn}_{o_{\partial\Omega}}(\gamma\circ\iota)\cdot(\gamma\circ\iota)^{*}\mu=\int_{(\partial\Omega,o_{\partial\Omega})}\mu.
\end{aligned}$$

Theorem 14.20 (Global Stokes’ Theorem). Let (Ω^n, o) be an oriented manifold with boundary and μ ∈ Ω^{n-1}_c(Ω). Further let o_∂Ω be the induced boundary orientation; then

$$\int_{(\Omega,o)}d\mu=\int_{(\partial\Omega,o_{\partial\Omega})}i_{\partial\Omega}^{*}\mu=\text{``}\int_{(\partial\Omega,o_{\partial\Omega})}\mu.\text{''}$$

Proof. Cover supp (μ) by {γ_j(D_j)}_{j=1}^N and let {h_j}_{j=1}^N be a partition of unity relative to this cover. Then

$$\mu=\sum_{j=1}^{N}h_j\mu\quad\text{and}\quad d\mu=\sum_{j=1}^{N}d(h_j\mu)$$

and so

$$\int_{(\Omega,o)}d\mu=\sum_{j=1}^{N}\int_{(\Omega,o)}d(h_j\mu).$$

However, by Theorems 14.17 and 14.19, for each j we have

$$\int_{(\Omega,o)}d(h_j\mu)=\int_{(\partial\Omega,o_{\partial\Omega})}i_{\partial\Omega}^{*}\left[h_j\mu\right]=\int_{(\partial\Omega,o_{\partial\Omega})}h_j\circ i_{\partial\Omega}\cdot i_{\partial\Omega}^{*}\mu.$$

Summing this equation on j then shows

$$\int_{(\Omega,o)}d\mu=\sum_{j=1}^{N}\int_{(\partial\Omega,o_{\partial\Omega})}h_j\circ i_{\partial\Omega}\cdot i_{\partial\Omega}^{*}\mu=\int_{(\partial\Omega,o_{\partial\Omega})}\sum_{j=1}^{N}h_j\circ i_{\partial\Omega}\cdot i_{\partial\Omega}^{*}\mu=\int_{(\partial\Omega,o_{\partial\Omega})}i_{\partial\Omega}^{*}\mu.$$

14.4 Example of Green’s Theorem

In order to visually understand the orientation of the boundary it is useful to be able to make use of the “right hand” rule, which we justify in the next remark.

Remark 14.21 (Right Hand Rule). If {v_1, v_2, v_3} is an orthonormal basis for R^3, then det[v_1|v_2|v_3] = 1 provided {v_1, v_2, v_3} satisfies the right hand rule. To understand this we choose a one parameter family of rotations, R(t), so that R(0) = I and R(1)e_i = v_i for i ∈ [3]. Then det R(t) ∈ {±1} is continuous and hence det R(1) = det R(0) = 1. A similar statement holds with R^3 replaced by R^2. In this case we say {v_1, v_2} satisfies the right hand rule if {v_1, v_2, e_3} satisfies the right hand rule, where we now identify v ∈ R^2 with (v, 0) ∈ R^3.

Let us further remark that any continuous deformation v_1(t), v_2(t), v_3(t) such that {v_1(t), v_2(t), v_3(t)} remains linearly independent has the property that


sgn(det[v_1(t)|v_2(t)|v_3(t)]) is constant in t. Using these remarks it is possible to find sgn(det[v_1|v_2|v_3]) by visual inspection!
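As a quick sanity check of the rotation argument in Remark 14.21 (an illustrative sketch, not part of the original notes), one can deform the standard frame through rotations and watch the determinant stay at +1, while swapping two vectors flips its sign.

```python
import numpy as np

def rot_z(theta):
    """Rotation of R^3 about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

e1, e2, e3 = np.eye(3)
# Deform (e1, e2, e3) continuously through the rotations R(t); the determinant
# is continuous with values in {+1, -1}, hence constantly +1 along the path.
for t in np.linspace(0.0, 1.0, 11):
    V = np.column_stack([rot_z(t * np.pi / 2) @ v for v in (e1, e2, e3)])
    assert np.isclose(np.linalg.det(V), 1.0)

# Swapping two vectors reverses the orientation (left hand rule).
assert np.isclose(np.linalg.det(np.column_stack([e2, e1, e3])), -1.0)
print("right handed frames have det = +1, swapped frames have det = -1")
```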

Let Ω be the region contained in R^2 as shown in Figure 14.2 below and μ = f(x, y) dx + g(x, y) dy be a one form defined on a neighborhood of Ω. We equip R^2 with the standard orientation, o = dx ∧ dy, where (x, y) are the standard coordinates on R^2. Then by Stokes’ theorem,

$$\int_{\partial\Omega}\mu=\int_{\Omega}d\mu\tag{14.3}$$

where ∂Ω is as described and oriented as in Figure 14.2.

Fig. 14.2. The region Ω whose oriented boundaries are shown as C_1 ∪ C_2 ∪ C_3 ∪ C. The orientation on the boundaries is determined by finding the outward pointing normals and then using the right hand rule.

Since

$$d\mu=(g_x-f_y)\,dx\wedge dy,$$

Eq. (14.3) takes on the familiar form

$$\int_{\partial\Omega}(f\,dx+g\,dy)=\int_{\Omega}(g_x-f_y)\,dx\wedge dy,$$

where in this case

$$\int_{\partial\Omega}(f\,dx+g\,dy)=\int_{C}(f\,dx+g\,dy)+\sum_{j=1}^{3}\int_{C_j}(f\,dx+g\,dy).$$
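The identity above is easy to test symbolically. The following sympy sketch (added here for illustration only) verifies it on the unit disk for the hypothetical choice f(x, y) = −y and g(x, y) = x, where both sides equal 2π.

```python
import sympy as sp

t, r, th = sp.symbols('t r theta', real=True)

# One form mu = f dx + g dy with the illustrative choice f = -y, g = x.
f = lambda x, y: -y
g = lambda x, y: x

# Left side: line integral over the counterclockwise boundary x = cos t, y = sin t.
x, y = sp.cos(t), sp.sin(t)
line = sp.integrate(f(x, y) * sp.diff(x, t) + g(x, y) * sp.diff(y, t), (t, 0, 2 * sp.pi))

# Right side: area integral of (g_x - f_y) dx ^ dy over the disk, in polar
# coordinates with area element r dr dtheta.  Here g_x - f_y = 2.
X, Y = sp.symbols('X Y', real=True)
integrand = sp.diff(g(X, Y), X) - sp.diff(f(X, Y), Y)
area = sp.integrate(integrand.subs({X: r * sp.cos(th), Y: r * sp.sin(th)}) * r,
                    (r, 0, 1), (th, 0, 2 * sp.pi))

assert sp.simplify(line - area) == 0   # both sides equal 2*pi
print(line, area)
```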


15 Final Exercises

This chapter comprises your last assignment, which is due at the time of the Final Exam. You should work on these problems alone.

15.1 Metric Topology

Exercise 15.1 (Why not Q?). Let Q be the rational numbers contained in the real numbers R and let d be the metric on R defined in the usual way by d(x, y) := |y − x|.

1. Show the function f_0 : Q → {0, 1} ⊂ Q defined for q ∈ Q by

$$f_0(q)=\begin{cases}1&\text{if }q>\sqrt 2\\ 0&\text{if }q<\sqrt 2\end{cases}$$

is continuous relative to the metric d restricted to Q.
2. Show the function f : R → {0, 1} defined for x ∈ R by

$$f(x)=\begin{cases}1&\text{if }x>\sqrt 2\\ 0&\text{if }x\le\sqrt 2\end{cases}$$

is not continuous relative to the metric d on R. [Notice that f_0 = f on Q!]

15.2 Exact ODEs

In this subsection, let M, N be two smooth real valued functions on R^2. We are going to be considering the ODE

$$\frac{dy}{dx}=y'(x)=-\frac{M(x,y(x))}{N(x,y(x))}\tag{15.1}$$

for a function y : R → R. Formally, y solves Eq. (15.1) provided

$$\text{``}N(x,y)\,dy+M(x,y)\,dx=0.\text{''}$$

The next few exercises explore this idea in more precise detail.

Exercise 15.2 (Constants of motion). Let (u, v) denote the standard coordinates on R^2 (i.e. u(a, b) = a and v(a, b) = b for all (a, b) ∈ R^2) and let μ ∈ Ω^1(R^2) be defined by

$$\mu=M(u,v)\,du+N(u,v)\,dv.$$

1. Show if y : R → R satisfies Eq. (15.1), then the path σ(x) = (x, y(x)) ∈ R^2 satisfies μ(σ'(x)_{σ(x)}) = 0 for all x ∈ R.
2. Further, if μ = df for some f ∈ C^∞(R^2) (i.e. μ is exact), then x → f(x, y(x)) is constant if y satisfies Eq. (15.1).

Exercise 15.3 (Integrating Factors). Let ϕ : R^2 → R be another smooth function. Show:

1. The one form μ is closed iff N_u − M_v = 0.
2. The one form ϕμ is closed iff

$$\varphi_uN+\varphi N_u-\varphi_vM-\varphi M_v=0.$$

We say ϕ is an integrating factor for μ if ϕμ is closed.
3. If ϕ is an integrating factor for μ, then by a Poincaré lemma ϕμ is exact, i.e. ϕμ = df for some f ∈ C^∞(R^2). For this f, show again that x → f(x, y(x)) is constant if y(·) satisfies Eq. (15.1).
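For readers who want to experiment, closedness computations of this kind can be checked symbolically. The sketch below is illustrative only and deliberately uses a hypothetical one form μ = v du − u dv (not the one from Exercise 15.4), for which ϕ(u, v) = 1/v² happens to be an integrating factor with potential f = u/v.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True, positive=True)

# A hypothetical one form mu = M du + N dv (not the one from Exercise 15.4).
M, N = v, -u

# mu is closed iff N_u - M_v = 0.
print('N_u - M_v =', sp.simplify(sp.diff(N, u) - sp.diff(M, v)))   # -2, so not closed

# phi is an integrating factor iff d(phi*mu) = 0, i.e. (phi*N)_u - (phi*M)_v = 0.
phi = 1 / v**2
closed_defect = sp.simplify(sp.diff(phi * N, u) - sp.diff(phi * M, v))
print('defect for phi*mu:', closed_defect)                          # 0, so phi*mu is closed

# A potential f with df = phi*mu, recovered by integrating phi*M in u.
f = sp.integrate(phi * M, u)                                        # = u/v (up to a function of v)
assert sp.simplify(sp.diff(f, u) - phi * M) == 0
assert sp.simplify(sp.diff(f, v) - phi * N) == 0
print('potential f =', f)
```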

Exercise 15.4 (Illustrative Example). Let M(u, v) = 3uv − v^2, N(u, v) = u(u − v), and

$$\mu=M\,du+N\,dv=(3uv-v^2)\,du+u(u-v)\,dv.$$

1. Show μ is not closed and hence not exact.
2. Show ϕ(u, v) = u is an integrating factor for μ.
3. Find an explicit function, f(u, v), such that uμ = df.

15.3 Integration on Manifolds Exercises

For these exercises let (x, y, z) be the standard coordinates on R^3, i.e. x(a, b, c) = a, y(a, b, c) = b, and z(a, b, c) = c. Further let


$$U=(0,\infty)\times(0,2\pi)\times(0,\pi),\qquad V=\mathbb R^3\setminus\{(0,b,c):b\ge0\text{ and }c\in\mathbb R\},$$

and f : U → V be the diffeomorphism (see Figure 15.1) defined by

$$f(r,\theta,\varphi)=r\,(\sin\varphi\cos\theta,\ \sin\varphi\sin\theta,\ \cos\varphi)^{\operatorname{tr}}=\begin{pmatrix}r\sin\varphi\cos\theta\\ r\sin\varphi\sin\theta\\ r\cos\varphi\end{pmatrix}\quad\text{for }(r,\theta,\varphi)\in U.$$

Fig. 15.1. This picture depicts the spherical parametrization of (most of) R^3.

Convention: we equip R3 with the standard orientation, o = dx∧ dy ∧ dz.

Remark 15.1. In working the exercises below, many of the calculations you will need you have already done in the exercises. Let me remind you of a few results here.

1. Recall from Exercise 8.2 (or see Figure 15.2 below) that

$$\det\left[f'(r,\theta,\varphi)\right]=\det\left[f_r|f_\theta|f_\varphi\right]\big|_{(r,\theta,\varphi)}=-r^2\sin\varphi.$$

2. By Book Exercise 2.3.11,

$$df_1\wedge df_2\wedge\cdots\wedge df_n=\det f'\cdot dx_1\wedge\cdots\wedge dx_n,$$

where f = (f_1, . . . , f_n)^{tr} and {x_j}_{j=1}^n are the standard coordinates on R^n.
3. Also recall that we have shown

$$f^{*}(dg_1\wedge\cdots\wedge dg_n)=d(g_1\circ f)\wedge d(g_2\circ f)\wedge\cdots\wedge d(g_n\circ f).$$

Putting items 2 and 3 together with g_i = x_i may be useful in your solutions below.
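Item 1 of Remark 15.1 can be double checked symbolically; the following sympy sketch (added here for convenience, not part of the original notes) recomputes det[f_r|f_θ|f_ϕ] for the spherical parametrization.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Spherical parametrization f(r, theta, phi) as in the notes.
f = sp.Matrix([r * sp.sin(ph) * sp.cos(th),
               r * sp.sin(ph) * sp.sin(th),
               r * sp.cos(ph)])

# Jacobian with columns f_r, f_theta, f_phi (in that order).
J = f.jacobian([r, th, ph])
det = sp.simplify(J.det())
print(det)                                  # -r**2*sin(phi)
assert sp.simplify(det + r**2 * sp.sin(ph)) == 0
```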

Exercise 15.5. Let g ∈ C^∞_c(U).

1. Show sgn_o(f) = −1.
2. Use Theorem 11.21 to show

$$\int_{V}g(x,y,z)\,dx\wedge dy\wedge dz=\int_{U}g(f(r,\theta,\varphi))\,r^2\sin\varphi\,dr\,d\theta\,d\varphi.$$

Notice that the right hand side is a multiple integral not involving differential forms. [You may find Exercise 8.2 helpful here.]

Notation 15.2 For the rest of these problems let ρ > 0 be fixed,

$$\Omega=C_0(\rho)=\{(a,b,c):a^2+b^2+c^2\le\rho^2\}\subset\mathbb R^3\quad\text{and}\quad\partial\Omega=\rho S^2=\{(a,b,c):a^2+b^2+c^2=\rho^2\},$$

so that Ω is the closed ρ-ball, which is a submanifold of R^3 with the indicated boundary. We further define γ : (0, 2π) × (0, π) → ∂Ω by

$$\gamma(\theta,\varphi)=f(\rho,\theta,\varphi)=\rho\,(\sin\varphi\cos\theta,\ \sin\varphi\sin\theta,\ \cos\varphi)^{\operatorname{tr}}\quad\text{for }(\theta,\varphi)\in(0,2\pi)\times(0,\pi).$$

This parametrizes all of ∂Ω up to a “negligible set.”

Fig. 15.2. The solid ρ-ball with boundary. From this picture one sees that {f_r, f_θ, f_ϕ} forms an orthogonal set with ‖f_r‖ = 1, ‖f_θ‖ = ρ sin ϕ, and ‖f_ϕ‖ = ρ. Moreover, {f_r, f_θ, f_ϕ} satisfies the left hand rule and therefore it follows that det[f_r|f_θ|f_ϕ]|_{(ρ,θ,ϕ)} = −ρ^2 sin ϕ.

Exercise 15.6. Show sgn_{o_∂Ω}(γ) = −1. Hint: notice that f_r(ρ, θ, ϕ) is the outward pointing normal vector at γ(θ, ϕ) = f(ρ, θ, ϕ) ∈ ∂Ω.


Exercise 15.7. Suppose that F = (F_1, F_2, F_3) : R^3 → R^3 is a smooth function and μ ∈ Ω^2(R^3) is defined by

$$\mu=i_F(dx\wedge dy\wedge dz)=F_1\,dy\wedge dz-F_2\,dx\wedge dz+F_3\,dx\wedge dy.$$

1. Show

$$\mu\bigl(\gamma_\theta(\theta,\varphi),\gamma_\varphi(\theta,\varphi)\bigr)=-\bigl(F(\gamma(\theta,\varphi))\cdot\gamma(\theta,\varphi)\bigr)\,\rho\sin\varphi,$$

or equivalently, viewing (θ, ϕ) as the standard coordinate functions on R^2,

$$\gamma^{*}\mu=-\bigl(F(\gamma(\theta,\varphi))\cdot\gamma(\theta,\varphi)\bigr)\,\rho\sin\varphi\cdot d\theta\wedge d\varphi.$$

Hints: write out μ(γ_θ, γ_ϕ) = (dx ∧ dy ∧ dz)(F(γ), γ_θ, γ_ϕ) and then observe that

$$F(\gamma)-(F(\gamma)\cdot f_r)\,f_r\in\operatorname{span}\{\gamma_\theta,\gamma_\varphi\}.$$

2. Ignoring the technicality that γ misses a negligible subset of ∂Ω, show

$$\int_{(\partial\Omega,o_{\partial\Omega})}\mu=\rho\int_{(0,2\pi)}d\theta\int_{(0,\pi)}d\varphi\,\sin\varphi\cdot F(\gamma(\theta,\varphi))\cdot\gamma(\theta,\varphi).\tag{15.2}$$

Notation 15.3 For the remaining two problems, let

$$F(x,y,z)=(x^2+y^2+z^2)\,(x,y,z)\quad\text{and}$$
$$\mu=i_F(dx\wedge dy\wedge dz)=(x^2+y^2+z^2)\left[x\,dy\wedge dz-y\,dx\wedge dz+z\,dx\wedge dy\right].\tag{15.3}$$

Exercise 15.8. For μ as in Eq. (15.3), compute explicitly the value of the surface integral, ∫_{(∂Ω,o_∂Ω)} μ.

Exercise 15.9 (A Verification of Stokes’ Theorem). Again suppose that μ is as in Eq. (15.3).

1. Using Exercise 8.12 or otherwise, find dμ.
2. Then with the aid of Exercise 15.5 find ∫_{(Ω,o)} dμ. [As a check you should compare your answer with Exercise 15.8.]


Part V Appendices


A *More metric space results

This appendix is optional for sure!

A.1 Compactness in Euclidean Spaces

Definition A.1 (Pre-compact). A subset A ⊂ X is precompact if its closure, Ā, is compact.

Example A.2. Suppose that F ⊂ X is an unbounded set, i.e. for some fixed x ∈ X and all n ∈ N there exists z_n ∈ F such that d(x, z_n) ≥ n. If there were a subsequence, {w_k := z_{n_k}}_{k=1}^∞, such that w = lim_{k→∞} w_k existed in X, then we would have n_k ≤ d(x, w_k) → d(x, w) < ∞, which is clearly impossible. This shows that compact sets must be bounded.

Example A.3. Suppose that F ⊂ X is not closed. Then there exists {z_n}_{n=1}^∞ ⊂ F such that z := lim_{n→∞} z_n ∉ F. Moreover, although every subsequence of {z_n}_{n=1}^∞ is convergent, they all still converge to z ∉ F. This shows that a compact set must be closed.

Lemma A.4 (Bolzano–Weierstrass property for C^D). Let D ∈ N. Every bounded sequence, {z(n)}_{n=1}^∞ ⊂ C^D, has a convergent subsequence.

Proof. By assumption there exists M < ∞ such that ‖z(n)‖ = d(z(n), 0) ≤ M for all n ∈ N. Write z(n) = (z_1(n), . . . , z_D(n)) ∈ C^D. Since |z_i(n)| ≤ ‖z(n)‖, it follows that {z_i(n)}_{n=1}^∞ is a bounded sequence in C. Hence by the Bolzano–Weierstrass property for C we may replace z(n) by a subsequence z(n_k) such that lim_{k→∞} z_1(n_k) =: z̄_1 exists. We may now replace the original sequence by this new subsequence and then find a further subsequence z(n_k) such that lim_{k→∞} z_i(n_k) = z̄_i exists for i = 1, 2. We may continue this way inductively to find a subsequence such that lim_{k→∞} z_i(n_k) = z̄_i exists for all 1 ≤ i ≤ D. It then follows that lim_{k→∞} ‖z̄ − z(n_k)‖ = 0 as desired, where z̄ := (z̄_1, . . . , z̄_D).

Proof of Theorem 10.35. In light of Examples A.2 and A.3 we are left to show that closed and bounded sets are sequentially compact. So let K ⊂ C^D be a closed and bounded set and {z_n}_{n=1}^∞ be any sequence in K. According to Lemma A.4, {z_n}_{n=1}^∞ has a convergent subsequence, {w_k := z_{n_k}}_{k=1}^∞. Since w_k ∈ K for all k and K is closed, it necessarily follows that lim_{k→∞} w_k ∈ K, which shows K is sequentially compact.

Example A.5 (Warning!). It is not true that a closed and bounded subset of an arbitrary metric space (X, d) is necessarily sequentially compact. For example, let Z denote the vector space of continuous functions on [0, 1] with values in R and for f ∈ Z let ‖f‖ = sup_{t∈[0,1]} |f(t)|. Then consider the set C := {f_n}_{n=0}^∞ where

$$f_n(t)=2^{(n+2)}\begin{cases}0&\text{if }t\in[0,2^{-(n+1)}]\cup[2^{-n},1]\\ t-2^{-(n+1)}&\text{if }2^{-(n+1)}\le t\le3\cdot2^{-(n+2)}\\ 2^{-n}-t&\text{if }3\cdot2^{-(n+2)}\le t\le2^{-n}.\end{cases}$$

[So f_n(t) is a shark tooth over the interval [2^{-(n+1)}, 2^{-n}].] Notice that ‖f_n‖ = 1 for all n so that C is bounded. Moreover, ‖f_n − f_m‖ = 1 for all m ≠ n, therefore there are no convergent subsequences of C. The reader should use this fact to see that C is closed and bounded but not sequentially compact!

Fig. A.1. Here are the plots of f0 and f1.

A.2 Open Cover Compactness

Example A.6. Suppose that A is an unbounded subset of X. Pick a_1 ∈ A and then choose {a_n}_{n=2}^∞ inductively so that d_{\{a_1,\dots,a_n\}}(a_{n+1}) ≥ 1 for all n. This


sequence then has the property that d(a_k, a_l) ≥ 1 for all k ≠ l, and from this it follows that F := {a_1, a_2, . . . } is a closed set. We then define an open cover of A by taking

$$\mathcal U=\{F^c,\,B_{a_1}(1/3),\,B_{a_2}(1/3),\,B_{a_3}(1/3),\,\dots\}.$$

This cover has no finite subcover. Therefore A can not be open cover compact.

Alternatively: given x ∈ X, the collection U := {B_x(n) : n ∈ N} is an open cover of X. So if K is an open cover compact subset of X, there must exist n_1 < n_2 < · · · < n_l so that

$$K\subset B_x(n_1)\cup B_x(n_2)\cup\cdots\cup B_x(n_l)=B_x(n_l).$$

This shows K is bounded.

Lemma A.7. Suppose that K ⊂ X is an open cover compact set, then K isclosed.

Proof. We will show that K^c is open. To this end suppose x ∈ K^c and let ε_k := (1/3) d(x, k) > 0 for all k ∈ K. It then follows that B_x(ε_k) ∩ B_k(ε_k) = ∅ for all k ∈ K. As {B_k(ε_k)}_{k∈K} is an open cover of K, there exists Λ ⊂_f K such that K ⊂ ∪_{k∈Λ} B_k(ε_k). If we now let δ := min_{k∈Λ} ε_k > 0, then

$$B_x(\delta)\cap B_k(\varepsilon_k)\subset B_x(\varepsilon_k)\cap B_k(\varepsilon_k)=\emptyset\quad\text{for all }k\in\Lambda$$

and therefore

$$B_x(\delta)\cap K\subset B_x(\delta)\cap\left[\cup_{k\in\Lambda}B_k(\varepsilon_k)\right]=\emptyset.$$

Proposition A.8 (Optional). Suppose that K ⊂ X is an open cover compact set and F ⊂ K is a closed subset. Then F is open cover compact. If {K_i}_{i=1}^n is a finite collection of open cover compact subsets of X, then K = ∪_{i=1}^n K_i is also an open cover compact subset of X.

Proof. Let U ⊂ τ be an open cover of F; then U ∪ {F^c} is an open cover of K. The cover U ∪ {F^c} of K has a finite subcover which we denote by U_0 ∪ {F^c}, where U_0 ⊂_f U. Since F ∩ F^c = ∅, it follows that U_0 is the desired subcover of F. For the second assertion suppose U ⊂ τ is an open cover of K. Then U covers each compact set K_i and therefore there exists a finite subset U_i ⊂_f U for each i such that K_i ⊂ ∪U_i. Then U_0 := ∪_{i=1}^n U_i is a finite subcover of U covering K.

Exercise A.1. Suppose f : X → Y is continuous and K ⊂ X is open cover compact. Show f(K) is an open cover compact subset of Y. Give an example of a continuous map, f : X → Y, and an open cover compact subset K of Y such that f^{-1}(K) is not open cover compact.

Exercise A.2 (Extreme value theorem III). Let (X, d) be an open covercompact metric space and f : X → R be a continuous function. Show −∞ <inf f ≤ sup f < ∞ and there exists a, b ∈ X such that f(a) = inf f andf(b) = sup f . Hint: use Exercise A.1 and Theorem ??.

Exercise A.3 (Uniform Continuity). Let (X, d) be an open cover compactmetric space, (Y, ρ) be a metric space and f : X → Y be a continuous function.Show that f is uniformly continuous, i.e. if ε > 0 there exists δ > 0 such thatρ(f(y), f (x)) < ε if x, y ∈ X with d(x, y) < δ.


Exercise A.5 (Dini’s Theorem). Let X be an open cover compact metric space and f_n : X → [0, ∞) be a sequence of continuous functions such that f_n(x) ↓ 0 as n → ∞ for each x ∈ X. Show that in fact f_n ↓ 0 uniformly in x, i.e. sup_{x∈X} f_n(x) ↓ 0 as n → ∞. Hint: Given ε > 0, consider the open sets V_n := {x ∈ X : f_n(x) < ε}.

Definition A.9. A collection F of closed subsets of a metric space (X, d) has the finite intersection property if ∩F_0 ≠ ∅ for all F_0 ⊂_f F.

The notion of open cover compactness may be expressed in terms of closedsets as follows.

Proposition A.10. A metric space X is open cover compact iff every family of closed sets F ⊂ 2^X having the finite intersection property satisfies ∩F ≠ ∅.

Proof. The basic point here is that complementation interchanges open and closed sets, and open covers go over to collections of closed sets with empty intersection. Here are the details.

(⇒) Suppose that X is open cover compact and F ⊂ 2^X is a collection of closed sets such that ∩F = ∅. Let

$$\mathcal U=\mathcal F^c:=\{C^c:C\in\mathcal F\}\subset\tau;$$

then U is a cover of X and hence has a finite subcover, U_0. Let F_0 = U_0^c ⊂_f F; then ∩F_0 = ∅, so that F does not have the finite intersection property.

(⇐) If X is not open cover compact, there exists an open cover U of X with no finite subcover. Let

$$\mathcal F=\mathcal U^c:=\{U^c:U\in\mathcal U\};$$

then F is a collection of closed sets with the finite intersection property while ∩F = ∅.


A.3 Equivalence of Sequential and Open Cover Compactness in Metric Spaces

Definition A.11. A metric space (X, d) is ε – bounded (ε > 0) if there existsa finite cover of X by balls of radius ε. We further say (X, d) is totally boundedif it is ε – bounded for all ε > 0.

Remark A.12 (Totally bounded means almost finite). An equivalent way to state that (X, d) is ε-bounded is to say there exists a finite subset Λ = Λ_ε ⊂_f X such that d_Λ(x) < ε for all x ∈ X. In other words, (X, d) is ε-bounded iff X agrees with a finite subset to within an ε-error.

Theorem A.13. Let (X, d) be a metric space. The following are equivalent.

(a) X is open cover compact.
(b) X is sequentially compact.
(c) X is totally bounded and complete.

Proof. The proof will consist of showing that a ⇒ b ⇒ c ⇒ a.

(a ⇒ b) We will show that not b ⇒ not a. Suppose there exists {x_n}_{n=1}^∞ ⊂ X which has no convergent subsequence. In this case the set S := {x_n ∈ X : n ∈ N} must be an infinite set, as we have already seen finite sets are sequentially compact. For every x ∈ X we must have

$$\varepsilon(x):=\liminf_{n\to\infty}d(x_n,x)>0,$$

since otherwise there would be a subsequence {x_{n_k}}_{k=1}^∞ such that lim_{k→∞} d(x_{n_k}, x) = 0, i.e. lim_{k→∞} x_{n_k} = x. We now let V_x := B_x(ε(x)/2) and observe that x_n can be in V_x for only finitely many n – otherwise we would conclude that liminf_{n→∞} d(x_n, x) ≤ ε(x)/2. From these observations, U := {V_x : x ∈ X} is an open cover of X with no finite subcover. Indeed, if Λ ⊂_f X, then we must still have x_n ∈ ∪_{x∈Λ} V_x for only finitely many n and in particular ∪_{x∈Λ} V_x can not cover S.

(b ⇒ c) Suppose {x_n}_{n=1}^∞ ⊂ X is a Cauchy sequence. By assumption there exists a subsequence {x_{n_k}}_{k=1}^∞ which is convergent to some point x ∈ X. Since {x_n}_{n=1}^∞ is Cauchy it follows that x_n → x as n → ∞, showing X is complete. Now for the sake of contradiction suppose that X is not totally bounded. Then there exists ε > 0 for which X is not ε-bounded. In particular, U := {B_x(ε) : x ∈ X} is an open cover of X with no finite subcover. We now use this to construct a sequence {x_n}_{n=1}^∞ ⊂ X. Choose x_1 ∈ X at random, then choose x_2 ∈ X \ B_{x_1}(ε), then x_3 ∈ X \ [B_{x_1}(ε) ∪ B_{x_2}(ε)], . . . ,

$$x_n\in X\setminus\left[B_{x_1}(\varepsilon)\cup B_{x_2}(\varepsilon)\cup\cdots\cup B_{x_{n-1}}(\varepsilon)\right],\ \dots.$$

The process may be continued indefinitely as U has no finite subcover. By construction we have chosen {x_n}_{n=1}^∞ such that d_{\{x_1,\dots,x_{n-1}\}}(x_n) ≥ ε for all n and therefore d(x_k, x_l) ≥ ε for all k ≠ l. Every subsequence will share this property, i.e. not be Cauchy, and hence can not be convergent.

(c ⇒ a) For the sake of contradiction, assume there exists an open cover V = {V_α}_{α∈A} of X with no finite subcover. Since X is totally bounded, for each n ∈ N there exists Λ_n ⊂_f X such that

$$X=\bigcup_{x\in\Lambda_n}B_x(1/n)\subset\bigcup_{x\in\Lambda_n}C_x(1/n).$$

Choose x_1 ∈ Λ_1 such that no finite subset of V covers K_1 := C_{x_1}(1). Since K_1 = ∪_{x∈Λ_2} K_1 ∩ C_x(1/2), there exists x_2 ∈ Λ_2 such that K_2 := K_1 ∩ C_{x_2}(1/2) can not be covered by a finite subset of V, see Figure A.2. Continuing this way inductively, we construct sets K_n = K_{n−1} ∩ C_{x_n}(1/n) with x_n ∈ Λ_n such that no K_n can be covered by a finite subset of V. Now choose y_n ∈ K_n for each n. Since {K_n}_{n=1}^∞ is a decreasing sequence of closed sets such that diam(K_n) ≤ 2/n, it follows that {y_n} is Cauchy and hence convergent with

$$y=\lim_{n\to\infty}y_n\in\cap_{m=1}^{\infty}K_m.$$

Since V is a cover of X, there exists V ∈ V such that y ∈ V. Since K_n ↓ {y} and diam(K_n) → 0, it now follows that K_n ⊂ V for n large. But this violates the assertion that K_n can not be covered by a finite subset of V.

Fig. A.2. Nested Sequence of cubes.

Corollary A.14. The compact subsets of Rn are the closed and bounded sets.


Proof. If K is closed and bounded then K is complete (being a closed subset of a complete space) and K is contained in [−M, M]^n for some positive integer M. For δ > 0, let

$$\Lambda_\delta=\delta\mathbb Z^n\cap[-M,M]^n:=\{\delta x:x\in\mathbb Z^n\text{ and }\delta|x_i|\le M\text{ for }i=1,2,\dots,n\}.$$

We will show, by choosing δ > 0 sufficiently small, that

$$K\subset[-M,M]^n\subset\cup_{x\in\Lambda_\delta}B(x,\varepsilon),\tag{A.1}$$

which shows that K is totally bounded. Hence by Theorem A.13, K is compact. Suppose that y ∈ [−M, M]^n; then there exists x ∈ Λ_δ such that |y_i − x_i| ≤ δ for i = 1, 2, . . . , n. Hence

$$d^2(x,y)=\sum_{i=1}^{n}(y_i-x_i)^2\le n\delta^2,$$

which shows that d(x, y) ≤ √n δ. Hence if we choose δ < ε/√n we have shown that d(x, y) < ε, i.e. Eq. (A.1) holds.
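The grid Λ_δ appearing in the proof is easy to visualize numerically. Here is a small numpy sketch (illustrative only, not part of the notes) that builds Λ_δ in [−M, M]² and checks that every sampled point of the cube lies within √n·δ < ε of the grid.

```python
import numpy as np

# The delta-grid Lambda_delta = delta*Z^n intersected with [-M, M]^n from the
# proof of Corollary A.14, here with n = 2, M = 1 and a target epsilon.
n, M, eps = 2, 1.0, 0.3
delta = eps / (2 * np.sqrt(n))          # any delta < eps/sqrt(n) works
ticks = np.arange(-M, M + delta, delta)
grid = np.array([(a, b) for a in ticks for b in ticks])   # Lambda_delta

rng = np.random.default_rng(0)
points = rng.uniform(-M, M, size=(1000, n))
# Distance from each sample point to the nearest grid point.
dists = np.min(np.linalg.norm(points[:, None, :] - grid[None, :, :], axis=2), axis=1)
assert dists.max() <= np.sqrt(n) * delta < eps
print("every sampled point of [-M,M]^2 is within", np.sqrt(n) * delta, "of the grid")
```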

A.4 Connectedness

Definition A.15. Let (X, d) be a metric space. Two subsets A and B of X are separated if Ā ∩ B = ∅ = A ∩ B̄. A set E ⊂ X is disconnected if E = A ∪ B where A and B are two non-empty separated sets; otherwise E is said to be connected. [Notice that separated sets are disjoint. The sets A = (0, 1) and B = [1, ∞) are disjoint but not separated.]

Theorem A.16 (The Connected Subsets of R). The connected subsets ofR are intervals.

Proof. We will break the proof into two parts. First we show if E is connected then E is an interval. Then we show if E is disconnected then E is not an interval.

1) Suppose that E ⊂ R is a connected subset and that a, b ∈ E with a < b. If there existed c ∈ (a, b) such that c ∉ E, then A := (−∞, c) ∩ E and B := (c, ∞) ∩ E would be two non-empty separated subsets such that E = A ∪ B, contradicting the connectedness of E. Hence (a, b) ⊂ E. Let α := inf(E) and β := sup(E) and choose α_n, β_n ∈ E such that α_n < β_n and α_n ↓ α and β_n ↑ β as n → ∞. By what we have just shown, (α_n, β_n) ⊂ E for all n and hence (α, β) = ∪_{n=1}^∞ (α_n, β_n) ⊂ E. From this it follows that E = (α, β), [α, β), (α, β] or [α, β], i.e. E is an interval.

2) Now suppose that E is a disconnected subset of R. Then E = A ∪ B where A and B are non-empty separated sets; let a ∈ A and b ∈ B. After relabelling A and B if necessary we may assume that a < b. Let p = sup([a, b] ∩ A). Since p ∈ Ā ∩ [a, b] it follows that p ∉ B and a ≤ p ≤ b. Since b ∈ B, p ≠ b and hence a ≤ p < b.

i) If p ∉ A then p ∉ E and a < p < b, which shows E is not an interval.
ii) If p ∈ A, then p ∉ B̄ and, because B̄^c is an open subset of R, there exists p_1 such that a ≤ p < p_1 < b and p_1 ∉ B̄ ⊃ B. Again p_1 ∉ A (for otherwise p ≥ p_1) and so p_1 ∉ E; hence again E is not an interval.

Lemma A.17. Suppose that f : X → Y is a continuous function betweentwo metric spaces (X and Y ) and A and B are separated subsets of Y. Thenα := f−1 (A) and β := f−1 (B) are separated subsets of X. In particular, ifE ⊂ X is connected, then f (E) is connected in Y.

Proof. Since α := f^{-1}(A) ⊂ f^{-1}(Ā) and f^{-1}(Ā) is closed, being the inverse image under a continuous function of the closed set Ā, it follows that ᾱ ⊂ f^{-1}(Ā). Thus if x ∈ ᾱ ∩ β, then f(x) ∈ Ā ∩ B = ∅, and hence ᾱ ∩ β = ∅. Similarly one shows α ∩ β̄ = ∅ as well.

If f(E) were disconnected, there would exist non-empty separated subsets, A and B, of f(E) so that f(E) = A ∪ B. The sets α := f^{-1}(A) and β := f^{-1}(B) are then non-empty separated subsets of X. It now follows that α_0 := α ∩ E and β_0 := β ∩ E are non-empty sets such that E = α_0 ∪ β_0 and

$$\bar\alpha_0\cap\beta_0\subset\bar\alpha\cap\beta=\emptyset\quad\text{and}\quad\alpha_0\cap\bar\beta_0\subset\alpha\cap\bar\beta=\emptyset,$$

which would imply E is disconnected. Thus if E is connected we must have that f(E) is connected.

Theorem A.18 (Intermediate Value Theorem). Suppose that (X, d) is aconnected metric space and f : X → R is a continuous map. Then f satisfiesthe intermediate value property. Namely, for every pair x, y ∈ X such thatf (x) < f(y) and c ∈ (f (x) , f(y)), there exists z ∈ X such that f(z) = c.

Proof. By Lemma A.17, f(X) is a connected subset of R, so by Theorem A.16, f(X) is a subinterval of R. Since f(x) and f(y) lie in f(X), so does every c between them, i.e. c = f(z) for some z ∈ X, and this completes the proof.

Lemma A.19. If E ⊂ X is a connected set and E ⊂ A ∪ B where A and B are two separated sets, then E ⊂ A or E ⊂ B.

Proof. Let α := E ∩ A and β := E ∩ B; then ᾱ ∩ β ⊂ Ā ∩ B = ∅ and similarly α ∩ β̄ = ∅. Thus E = α ∪ β with α and β being separated sets. Since E is connected we must have α = ∅ or β = ∅, i.e. E ⊂ B or E ⊂ A.


Proposition A.20. Suppose that F and G are connected subsets of X such that F ∩ G ≠ ∅; then E = F ∪ G is connected in X as well.

Proof. Suppose that E = A ∪ B where A and B are separated sets. Then from Lemma A.19 we know that F ⊂ A or F ⊂ B and similarly G ⊂ A or G ⊂ B. If both F and G are contained in the same set (say A), then E ⊂ A; since separated sets are disjoint and E = A ∪ B, it follows that B must be empty. On the other hand, if F ⊂ A and G ⊂ B, then ∅ ≠ F ∩ G ⊂ A ∩ B, which would violate A and B being separated. Thus we have shown there do not exist two non-empty separated sets A and B such that E = A ∪ B, i.e. E is connected.

Definition A.21. A subset E of a metric space X is path connected if to every pair of points x_0, x_1 ∈ E there exists σ ∈ C([0, 1], E) such that σ(0) = x_0 and σ(1) = x_1. We refer to σ as a path joining x_0 to x_1.

Proposition A.22. Every path connected subset, E, of a metric space X isconnected.

Exercise A.6. Prove Proposition A.22, i.e. if E ⊂ X is path connected then E is connected. Hint: for the sake of contradiction suppose that A and B are two non-empty separated subsets of X such that E = A ∪ B and choose a path connecting a point in A to a point in B.

Definition A.23. A subset, C, of a vector space, X, is convex if for all a, b ∈ C the path

$$\sigma(t)=a+t(b-a)=(1-t)a+tb\quad\text{for }0\le t\le1$$

is contained in C.

Example A.24. Every convex subset, C, of a normed vector space (X, ‖·‖) ispath connected and hence connected.

Exercise A.7. Suppose that (X, ‖·‖) is a normed space, x ∈ X, and R > 0.Show the open and closed balls in X, Bx (R) and Cx (R) , are both convex setsand hence path connected.

Definition A.25. A metric space X is locally path connected if for eachx ∈ X, there is an open neighborhood V ⊂ X of x which is path connected.

Proposition A.26. Let X be a metric space.

1. If X is connected and locally path connected, then X is path connected.
2. If X is any connected open subset of R^n, then X is path connected.

Exercise A.8. Prove item 1. of Proposition A.26, i.e. if X is connected andlocally path connected, then X is path connected. Hint: fix x0 ∈ X and letW denote the set of x ∈ X such that there exists σ ∈ C([0, 1], X) satisfyingσ(0) = x0 and σ(1) = x. Then show W is both open and closed.

Exercise A.9. Prove item 2. of Proposition A.26, i.e. if X is any connectedopen subset of Rn, then X is path connected.

A.4.1 Connectedness Problems

Exercise A.10. In this exercise we will work inside the metric space Q with d(x, y) := |x − y| for all x, y ∈ Q. Let a, b ∈ Q with a < b and let

$$J:=[a,b]\cap\mathbb Q=\{x\in\mathbb Q:a\le x\le b\}.$$

Show J is disconnected in Q.

Exercise A.11. Suppose a < b and f : (a, b)→ R is a non-decreasing function.Show if f satisfies the intermediate value property (see Theorem A.18), then fis continuous.

Exercise A.12. Suppose −∞ < a < b ≤ ∞ and f : [a, b) → R is a strictly increasing continuous function. Using the intermediate value theorem, one sees that f([a, b)) is an interval, and since f is strictly increasing it must be of the form [c, d) for some c ∈ R and d ∈ R̄ with c < d. Show the inverse function f^{-1} : [c, d) → [a, b) is continuous and strictly increasing. In particular, if n ∈ N, apply this result to f(x) = x^n for x ∈ [0, ∞) to construct the positive nth root of a real number.

Exercise A.13. Let

$$X:=\{(x,y)\in\mathbb R^2:y=\sin(x^{-1})\text{ with }x\ne0\}\cup\{(0,0)\}$$

equipped with the relative topology induced from the standard topology on R^2. Show X is connected but not path connected.

Remark A.27 (Structure of open sets in R). Let V ⊂ R be an open set. For x ∈ V, let a_x := inf{a : (a, x] ⊂ V} and b_x := sup{b : [x, b) ⊂ V}. Since V is open, a_x < x < b_x and it is easily seen that J_x := (a_x, b_x) ⊂ V. Moreover, if y ∈ V and J_x ∩ J_y ≠ ∅, then J_x = J_y. The collection, {J_x : x ∈ V}, is at most countable since we may label each J ∈ {J_x : x ∈ V} by choosing a rational number r ∈ J. Letting {J_n : n < N}, with N = ∞ allowed, be an enumeration of {J_x : x ∈ V}, we have V = ∐_{n<N} J_n as desired.
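When an open set V is presented as a finite union of open intervals, the decomposition of Remark A.27 amounts to merging strictly overlapping intervals. The short Python sketch below is an illustration added here (not part of the notes); the function name `components` is made up.

```python
def components(intervals):
    """Merge a finite list of open intervals (a, b) into the disjoint open
    intervals J_n whose union is V, cf. Remark A.27.  Open intervals that
    merely touch at an endpoint are NOT merged, since that endpoint is
    missing from V."""
    pieces = sorted((a, b) for a, b in intervals if a < b)
    out = []
    for a, b in pieces:
        if out and a < out[-1][1]:           # strictly overlaps the previous component
            out[-1][1] = max(out[-1][1], b)
        else:
            out.append([a, b])
    return [tuple(j) for j in out]

# V = (0,1) u (0.5,2) u (3,4) decomposes into the components (0,2) and (3,4).
print(components([(0, 1), (0.5, 2), (3, 4)]))   # [(0, 2), (3, 4)]
```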

A.5 *More on the closure and related operations

This section is optional reading.

Definition A.28 (Closure). Given a set A contained in a metric space X, let Ā ⊂ X be the closure of A defined by

$$\bar A:=\{x\in X:\exists\ \{x_n\}\subset A\ \ni\ x=\lim_{n\to\infty}x_n\}.$$

That is to say, Ā contains A together with all limit points of A.


Lemma A.29 (Optional). For any A ⊂ X:

1. Ā = A if A is closed.
2. Ā = {x : d_A(x) = 0} and Ā is closed.
3. Ā = {x ∈ X : A ∩ B_x(r) ≠ ∅ ∀ r > 0}.
4. d_A(x) > 0 for all x ∈ A^c if A is closed.

Proof. 1. We always have A ⊂ Ā. If A is closed we can not leave A by taking limits and hence Ā ⊂ A, i.e. Ā = A if A is closed.

2. Let F := {x : d_A(x) = 0}, which is a closed set since d_A is continuous. If x ∈ F (i.e. d_A(x) = 0), there exists x_n ∈ A such that d(x, x_n) ≤ 1/n for all n ∈ N. Thus lim_{n→∞} x_n = x and so x ∈ Ā. This shows F ⊂ Ā. Conversely, if x ∈ Ā, there exists {x_n} ⊂ A such that lim_{n→∞} x_n = x and so

$$d_A(x)=d_A\bigl(\lim_{n\to\infty}x_n\bigr)=\lim_{n\to\infty}d_A(x_n)=\lim_{n\to\infty}0=0,$$

which shows x ∈ F.

2. ⇐⇒ 3. Since A ∩ B_x(r) ≠ ∅ happens iff d_A(x) < r, we see that A ∩ B_x(r) ≠ ∅ ∀ r > 0 iff d_A(x) < r for all r > 0, i.e. iff d_A(x) = 0.

4. If A is closed then

$$A=\bar A=\{x\in X:d_A(x)=0\}$$

and therefore A^c = {x ∈ X : d_A(x) > 0}.

Proposition A.30. If A ⊂ X, then

$$\bar A=\cap\{F:A\subset F\subset X\text{ with }F\text{ closed}\}.\tag{A.2}$$

That is to say Ā is the smallest closed set containing A and in particular Ā is closed.

Proof. Ā is closed. By Lemma A.29, Ā = {x ∈ X : d_A(x) = 0} and hence Ā is closed because d_A is continuous. Alternatively, without using Lemma A.29, suppose {x_n} ⊂ Ā is a convergent sequence and x = lim_{n→∞} x_n ∈ X. By definition of Ā, there exists y_n ∈ A such that d(x_n, y_n) ≤ n^{-1} for all n. Therefore by the triangle inequality,

$$d(x,y_n)\le d(x,x_n)+d(x_n,y_n)\le d(x,x_n)+\frac1n\to0\ \text{as }n\to\infty,$$

which shows x = lim_{n→∞} y_n ∈ Ā and hence Ā is closed.

Equation (A.2) holds. If F is any closed set containing A and {x_n} ⊂ A is a sequence converging to a point x ∈ Ā, it follows that x ∈ F because F is closed. Thus Ā ⊂ F and hence Eq. (A.2) holds as Ā is closed.

Exercise A.14. Suppose that A and B are subsets of a metric space; show $\overline{A\cup B}=\bar A\cup\bar B$.

Exercise A.15. Give an example showing that $\overline{\cup_{n=1}^{\infty}A_n}$ need not be equal to $\cup_{n=1}^{\infty}\bar A_n$.

Exercise A.16. If A is a non-empty subset of X, then d_A = d_{Ā}.

Definition A.31 (Boundary of a set). The boundary of A is the set $\operatorname{bd}(A)=\bar A\cap\overline{A^c}$.

Proposition A.32. If A is a subset of a metric space, (X, d), then bd(A) may be computed using either:

1. $\operatorname{bd}(A)=\bar A\cap\overline{A^c}$,
2. bd(A) = {x ∈ X : d_A(x) = 0 = d_{A^c}(x)},
3. bd(A) = {x ∈ X : B_x(r) ∩ A ≠ ∅ ≠ B_x(r) ∩ A^c for all r > 0}, or
4. bd(A) = {x ∈ X : ∃ {x_n} ⊂ A and {y_n} ⊂ A^c ∋ lim_{n→∞} x_n = x = lim_{n→∞} y_n}. [So the boundary points of A are those points x ∈ X which are “on the boundary of A.”]

Proof. Item 1 is the definition of bd(A). Item 2 now follows from item 1 and the fact that Ā = {x ∈ X : d_A(x) = 0}. I leave it to the reader to check that 2. ⟹ 3. ⟹ 4. ⟹ 1.

Definition A.33 (Dense / Separable). We say A is dense in X if Ā = X, i.e. every element x ∈ X is a limit of a sequence of elements from A. A metric space is said to be separable if it contains a countable dense subset, D.

Definition A.34. Let (X, d) be a metric space and A be a subset of X.

1. The closure of A is the smallest closed set Ā containing A, i.e. Ā := ∩{F : A ⊂ F @ X}. (Because of Proposition A.30 this is consistent with Definition A.28 for the closure of a set in a metric space.)
2. The interior of A is the largest open set A^o contained in A, i.e. A^o = ∪{V ∈ τ : V ⊂ A}.
3. A ⊂ X is a neighborhood of a point x ∈ X if x ∈ A^o.
4. The accumulation points of A is the set acc(A) = {x ∈ X : V ∩ [A \ {x}] ≠ ∅ for all V ∈ τ_x}.


5. The boundary of A is the set bd(A) := Ā \ A^o.
6. A is dense in X if Ā = X, and X is said to be separable if there exists a countable dense subset of X.

Lemma A.35. Let (X, d) be a metric space and A be a subset of X; then

$$A^o=\{x\in X:B_x(r_x)\subset A\text{ for some }r_x>0\}.$$

Proof. Let V := {x ∈ X : B_x(r) ⊂ A for some r = r_x > 0}. If B_x(r) ⊂ A and y ∈ B_x(r) and δ = r − d(x, y), then B_y(δ) ⊂ B_x(r) ⊂ A, which shows that y ∈ V, i.e. B_x(r) ⊂ V. Thus we may write

$$V=\cup\{B_x(r):B_x(r)\subset A\}.$$

This shows that V is an open subset of A. Moreover, if W is another open subset of A, then

$$W=\cup\{B_x(r):B_x(r)\subset W\}\subset\cup\{B_x(r):B_x(r)\subset A\}=V,$$

so that V is the largest open subset contained in A. This completes the proof.

Remark A.36. The relationships between the interior and the closure of a set are:

$$(A^o)^c=\bigcap\{V^c:V\in\tau\text{ and }V\subset A\}=\bigcap\{C:C\text{ is closed and }C\supset A^c\}=\overline{A^c}$$

and similarly, $(\bar A)^c=(A^c)^o$.

Definition A.37. A subset A ⊂ X is a neighborhood of x if there exists anopen set V ⊂o X such that x ∈ V ⊂ A. We will say that A ⊂ X is an openneighborhood of x if A is open and x ∈ A.

Example A.38. Let x ∈ X and δ > 0; then C_x(δ) and B_x(δ)^c are closed subsets of X. For example, if {y_n}_{n=1}^∞ ⊂ C_x(δ) and y_n → y ∈ X, then d(y_n, x) ≤ δ for all n and using Corollary 10.11 it follows that d(y, x) ≤ δ, i.e. y ∈ C_x(δ). A similar proof shows B_x(δ)^c is closed, see Exercise ??.

Exercise A.17 (Completeness). Let (X, d) be a complete metric space. LetA ⊂ X be a subset of X viewed as a metric space using d|A×A. Show that(A, d|A×A) is complete iff A is a closed subset of X.

Exercise A.18. Let (X, ‖·‖) be a normed space and d(x, y) := ‖y − x‖. Show:

1. C_x(r)^o = B_x(r),
2. $\overline{B_x(r)}=C_x(r)$,
3. bd(C_x(r)) = bd(B_x(r)) = {y ∈ X : ‖y − x‖ = r}.

Example A.39 (Words of Caution). Let (X, d) be a metric space. It is always true that $\overline{B_x(\varepsilon)}\subset C_x(\varepsilon)$ since C_x(ε) is a closed set containing B_x(ε). However, it is not always true that $\overline{B_x(\varepsilon)}=C_x(\varepsilon)$. For example let X = {1, 2} and d(1, 2) = 1; then B_1(1) = {1} and $\overline{B_1(1)}=\{1\}$, while C_1(1) = X. For another counterexample, take

$$X=\{(x,y)\in\mathbb R^2:x=0\text{ or }x=1\}$$

with the usual Euclidean metric coming from the plane. Then

$$B_{(0,0)}(1)=\{(0,y)\in\mathbb R^2:|y|<1\},\qquad\overline{B_{(0,0)}(1)}=\{(0,y)\in\mathbb R^2:|y|\le1\},$$

while

$$C_{(0,0)}(1)=\overline{B_{(0,0)}(1)}\cup\{(1,0)\}.$$

Exercise A.19. If D is a dense subset of a metric space (X, d) and E ⊂ X is a subset such that to every point x ∈ D there exists {x_n}_{n=1}^∞ ⊂ E with x = lim_{n→∞} x_n, then E is also a dense subset of X. [If points in E well approximate every point in D and the points in D well approximate the points in X, then the points in E also well approximate all points in X.]

Exercise A.20. Suppose (X, d) is a metric space which contains an uncount-able subset Λ ⊂ X with the property that there exists ε > 0 such thatd (a, b) ≥ ε for all a, b ∈ Λ with a 6= b. Show that (X, d) is not separable.

Exercise A.21 (Intermediate value theorem). Suppose that −∞ < a < b < ∞ and f : [a, b] → R is a continuous function such that f(a) ≤ f(b). Show for any y ∈ [f(a), f(b)], there exists a c ∈ [a, b] such that f(c) = y. [The same result holds for y ∈ [f(b), f(a)] if f(b) ≤ f(a) – just replace f by −f in this case.] Hint: Let S := {t ∈ [a, b] : f(t) ≤ y} and let c := sup(S).

Exercise A.22 (Inverse Function Theorem I). Let f : [a, b] → [c, d] be a strictly increasing (i.e. f(x_1) < f(x_2) whenever x_1 < x_2) continuous function such that f(a) = c and f(b) = d. Then f is bijective and the inverse function, g := f^{-1} : [c, d] → [a, b], is strictly increasing and continuous.


B Smoothing by Convolution

Our first priority is to consider the metric space X = J = [a, b] for some−∞ < a < b < ∞. For the remainder of this section, Z will be used to denotea Banach space.

Definition B.1 (Convolution). For f, g ∈ C(R) (or more generally Riemann integrable) with either f or g having compact support, we define the convolution of f and g by

$$f*g(x)=\int_{\mathbb R}f(x-y)g(y)\,dy=\int_{\mathbb R}f(y)g(x-y)\,dy.$$

[The two forms given above are equal by a simple change of variables argument.] We will also use this definition when one of the functions, either f or g, takes values in a Banach space Z.

Remark B.2. If g ∈ C(R, [0, ∞)) has compact support and satisfies ∫_R g(x) dx = 1, then

$$f*g(x)=\int_{\mathbb R}f(x-y)g(y)\,dy$$

is an average of f in a neighborhood of x. It turns out that in general averaging operations like this tend to smooth out functions. The next few exercises are meant to give you a feeling for the smoothing properties of convolutions.

Exercise B.1. For ε > 0, let f_ε(x) := (1/(2ε)) 1_{[−ε,ε]}(x) so that

$$f_\varepsilon*g(x)=\frac1{2\varepsilon}\int_{-\varepsilon}^{\varepsilon}g(x-y)\,dy=\frac1{2\varepsilon}\int_{x-\varepsilon}^{x+\varepsilon}g(y)\,dy$$

is the uniform average of g over the interval [x − ε, x + ε]. For ω ∈ R let χ_ω(x) := e^{iωx} and show

$$f_\varepsilon*\chi_\omega=\frac{\sin\omega\varepsilon}{\omega\varepsilon}\,\chi_\omega.$$

Notice that f_ε ∗ χ_ω → 0 as |ω| → ∞, so that convolution suppresses high frequency signals. Also notice that lim_{ε↓0} f_ε ∗ χ_ω = χ_ω – this is a special case of Lemma B.3.

Exercise B.2. Let f_ε be as in Exercise B.1. Assume that ε ∈ (0, 1/2); show f_ε ∗ f_1 is an even function determined by

$$f_\varepsilon*f_1(x)=\begin{cases}1&\text{if }0\le x\le1-\varepsilon\\ \frac{1+\varepsilon-x}{2\varepsilon}&\text{if }1-\varepsilon\le x\le1+\varepsilon\\ 0&\text{if }x\ge1+\varepsilon.\end{cases}$$

Exercise B.3 (Convolutions are as smooth as their smoothest factor). Let ϕ : R → C be a C^∞ function with compact support and f be a continuous function on R. Show ϕ ∗ f is an infinitely differentiable function and

$$\frac{d^n}{dx^n}\,\varphi*f(x)=\varphi^{(n)}*f(x)\quad\text{for all }n\in\mathbb N.$$

Hint: verify that it is permissible to differentiate past the integral.

Lemma B.3 (Approximate δ – sequences). Suppose that {q_n}_{n=1}^∞ is a sequence of non-negative continuous real valued functions on R with compact support that satisfy

$$\int_{\mathbb R}q_n(x)\,dx=1\quad\text{and}\tag{B.1}$$
$$\lim_{n\to\infty}\int_{|x|\ge\varepsilon}q_n(x)\,dx=0\quad\text{for all }\varepsilon>0.\tag{B.2}$$

If f ∈ BC(R, Z), then

$$q_n*f(x):=\int_{\mathbb R}q_n(y)f(x-y)\,dy$$

converges to f uniformly on compact subsets of R.

Proof. Let x ∈ R; then because of Eq. (B.1),

$$\|q_n*f(x)-f(x)\|=\left\|\int_{\mathbb R}q_n(y)\bigl(f(x-y)-f(x)\bigr)\,dy\right\|\le\int_{\mathbb R}q_n(y)\,\|f(x-y)-f(x)\|\,dy.$$


Let M = sup{‖f(x)‖ : x ∈ R}. Then for any ε > 0, using Eq. (B.1),

$$\begin{aligned}
\|q_n*f(x)-f(x)\|&\le\int_{|y|\le\varepsilon}q_n(y)\,\|f(x-y)-f(x)\|\,dy+\int_{|y|>\varepsilon}q_n(y)\,\|f(x-y)-f(x)\|\,dy\\
&\le\sup_{|w|\le\varepsilon}\|f(x+w)-f(x)\|+2M\int_{|y|>\varepsilon}q_n(y)\,dy.
\end{aligned}$$

So if K is a compact subset of R (for example a large interval) we have

$$\sup_{x\in K}\|q_n*f(x)-f(x)\|\le\sup_{|w|\le\varepsilon,\ x\in K}\|f(x+w)-f(x)\|+2M\int_{|y|>\varepsilon}q_n(y)\,dy$$

and hence by Eq. (B.2),

$$\limsup_{n\to\infty}\sup_{x\in K}\|q_n*f(x)-f(x)\|\le\sup_{|w|\le\varepsilon,\ x\in K}\|f(x+w)-f(x)\|.$$

This finishes the proof since the right member of this equation tends to 0 as ε ↓ 0 by uniform continuity of f on compact subsets of R.

Exercise B.4 (Smooth approximate δ – functions). Let ϕ : R → [0, ∞) be the function in Remark ?? and set ψ(x) = (1/c) ϕ(x), where

$$c:=\int_{\mathbb R}\varphi(x)\,dx.$$

Show the functions ψ_n(x) := nψ(nx) for n ∈ N form an approximate δ – sequence.

We are going to introduce another approximate δ – sequence. Let q_n : R → [0, ∞) be defined by

$$q_n(x):=\frac1{c_n}(1-x^2)^n1_{|x|\le1}\quad\text{where}\quad c_n:=\int_{-1}^{1}(1-x^2)^n\,dx.\tag{B.3}$$

Figure B.1 displays the key features of the functions q_n.

The key observations we will need for the proof below are, for all ε ∈ (0, 1),

$$\int_0^1(1-x^2)^n\,dx\ge\int_0^\varepsilon(1-x^2)^n\,dx\ge\int_0^\varepsilon\frac x\varepsilon(1-x^2)^n\,dx\quad\text{and}\quad\int_\varepsilon^1(1-x^2)^n\,dx\le\int_\varepsilon^1\frac x\varepsilon(1-x^2)^n\,dx,$$

wherein we have used x/ε ≤ 1 if x ∈ [0, ε] and x/ε ≥ 1 if x ∈ [ε, 1].

Fig. B.1. A plot of q1, q50, and q100. The most peaked curve is q100 and the least isq1. The total area under each of these curves is one.

Lemma B.4. The sequence {q_n}_{n=1}^∞ is an approximate δ – sequence, i.e. it satisfies Eqs. (B.1) and (B.2).

Proof. By construction, q_n ∈ C_c(R, [0, ∞)) for each n and Eq. (B.1) holds. Since

$$\int_{|x|\ge\varepsilon}q_n(x)\,dx=\frac{2\int_\varepsilon^1(1-x^2)^n\,dx}{2\int_0^\varepsilon(1-x^2)^n\,dx+2\int_\varepsilon^1(1-x^2)^n\,dx}\le\frac{\int_\varepsilon^1\frac x\varepsilon(1-x^2)^n\,dx}{\int_0^\varepsilon\frac x\varepsilon(1-x^2)^n\,dx}=\frac{(1-x^2)^{n+1}\big|_{x=1}^{x=\varepsilon}}{(1-x^2)^{n+1}\big|_{x=\varepsilon}^{x=0}}=\frac{(1-\varepsilon^2)^{n+1}}{1-(1-\varepsilon^2)^{n+1}}\to0\ \text{as }n\to\infty,$$

the proof is complete.
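A quick numerical check of Lemma B.4 (a sketch added here, not part of the notes): the normalized kernels q_n integrate to one while the mass outside [−ε, ε] decays to zero as n grows.

```python
import numpy as np

def qn_mass_outside(n, eps, num=200001):
    """Return (total integral of q_n, integral of q_n over |x| >= eps), midpoint rule."""
    step = 2.0 / num
    x = -1.0 + (np.arange(num) + 0.5) * step
    w = (1.0 - x**2) ** n
    c_n = w.sum() * step                      # normalizing constant c_n
    q = w / c_n
    total = q.sum() * step
    outside = q[np.abs(x) >= eps].sum() * step
    return total, outside

for n in (1, 10, 100, 1000):
    total, outside = qn_mass_outside(n, eps=0.25)
    print(n, round(total, 6), outside)
# The total mass stays 1 while the outside mass tends to 0, as in Eqs. (B.1), (B.2).
```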

Notation B.5 Let Z_+ := N ∪ {0} and for x ∈ R^d and α ∈ Z_+^d let x^α = ∏_{i=1}^d x_i^{α_i} and |α| = ∑_{i=1}^d α_i. A polynomial on R^d with values in Z is a function p : R^d → Z of the form

$$p(x)=\sum_{\alpha:|\alpha|\le N}p_\alpha x^\alpha\quad\text{with }p_\alpha\in Z\text{ and }N\in\mathbb Z_+.$$

If p_α ≠ 0 for some α such that |α| = N, then we define deg(p) := N to be the degree of p. If Z is a complex Banach space, the function p has a natural extension to z ∈ C^d, namely p(z) = ∑_{α:|α|≤N} p_α z^α where z^α = ∏_{i=1}^d z_i^{α_i}.


The reader is asked to prove the following proposition in Exercise B.5 below.

Proposition B.6. Suppose that f ∈ L^1_loc(R^d, m) and ϕ ∈ C^1_c(R^d); then f ∗ ϕ ∈ C^1(R^d) and ∂_i(f ∗ ϕ) = f ∗ ∂_iϕ. Moreover, if ϕ ∈ C^∞_c(R^d) then f ∗ ϕ ∈ C^∞(R^d).

Exercise B.5. If f ∈ L^1_loc(R^d, m) and ϕ ∈ C^1_c(R^d), then f ∗ ϕ ∈ C^1(R^d) and ∂_i(f ∗ ϕ) = f ∗ ∂_iϕ. Moreover, if ϕ ∈ C^∞_c(R^d) then f ∗ ϕ ∈ C^∞(R^d).

Exercise B.6. Show C^∞_c(R^d) is dense in L^p(R^d, m) for any 1 ≤ p < ∞.

Lemma B.7. Given a rectangle R in R^d, say R = [a_1, b_1) × · · · × [a_d, b_d), there exists f_k ∈ C^∞_c(R^d) such that f_k → 1_R boundedly.

Proof. It suffices to consider the one dimensional case. Let ϕ ∈ C^∞_c(R) be such that ϕ ≥ 0, ϕ is supported in (−1, 0), and ∫_R ϕ(x) dx = 1. Set ϕ_ε(x) = (1/ε) ϕ(x/ε). Then

$$\varphi_\varepsilon*1_{[a,b)}(x)=\int_{\mathbb R}\varphi_\varepsilon(y)1_{[a,b)}(x-y)\,dy=\int_{\mathbb R}\varphi(y)1_{[a,b)}(x-\varepsilon y)\,dy=\int_{-1}^{0}\varphi(y)1_{[a,b)}(x-\varepsilon y)\,dy\to1_{[a,b)}(x)\ \text{as }\varepsilon\downarrow0$$

for all x ∈ R.

Corollary B.8 (C^∞ – Urysohn’s Lemma). Given K @@ U ⊂_o R^d, there exists f ∈ C^∞_c(R^d, [0, 1]) such that supp(f) ⊂ U and f = 1 on K.

Proof. [Note to self: give the proof from Guillemin and Haine here instead – no convolutions needed.]

Let d be the standard metric on R^d and ε := d(K, U^c), which is positive since K is compact and d(x, U^c) > 0 for all x ∈ K. Further let V := {x ∈ R^d : d(x, K) < ε/3} and then take f = ϕ_{ε/3} ∗ 1_V, where ϕ_t(x) = t^{−d} ϕ(x/t) as in Theorem ?? and ϕ is as in Lemma 11.19. It then follows that

$$\operatorname{supp}(f)\subset\operatorname{supp}(\varphi_{\varepsilon/3})+V_{\varepsilon/3}\subset V_{2\varepsilon/3}\subset U.$$

Since V_{2ε/3} is closed and bounded, f ∈ C^∞_c(U) and for x ∈ K,

$$f(x)=\int_{\mathbb R^d}1_{d(y,K)<\varepsilon/3}\cdot\varphi_{\varepsilon/3}(x-y)\,dy=\int_{\mathbb R^d}\varphi_{\varepsilon/3}(x-y)\,dy=1.$$

The proof will be finished after the reader (easily) verifies 0 ≤ f ≤ 1.