Appendix: Fourier methods - Cornell University
pages.physics.cornell.edu/~sethna/teaching/QuantumI/HW/FourierAp…

Copyright Oxford University Press 2006 v1.0

Appendix: Fourier methods

A.1 Fourier conventions
A.2 Derivatives, convolutions, and correlations
A.3 Fourier methods and function space
A.4 Fourier and translational symmetry

Why are Fourier methods important? Why is it so useful for us to transform functions of time and space $y(x, t)$ into functions of frequency and wavevector $\tilde{y}(k, \omega)$?

• Humans hear frequencies. The human ear analyzes pressure variations in the air into different frequencies. Large frequencies $\omega$ are perceived as high pitches; small frequencies are low pitches. The ear, very roughly, does a Fourier transform of the pressure $P(t)$ and transmits $|\tilde{P}(\omega)|^2$ to the brain.¹

¹ Actually, this is how the ear seems to work, but not how it does work. First, the signal to the brain is time dependent, with the tonal information changing as a word or tune progresses; it is more like a wavelet transform, giving the frequency content in various time slices. Second, the phase information in $\tilde{P}$ is not completely lost; power and pitch are the primary signal, but the relative phases of different pitches are also perceptible. Third, experiments have shown that the human ear is very nonlinear in its mechanical response.

• Diffraction experiments measure Fourier components. Many experimental methods diffract waves (light, X-rays, electrons, or neutrons) off of materials (Section 10.2). These experiments typically probe the absolute square of the Fourier amplitude of whatever is scattering the incoming beam.

• Common mathematical operations become simpler in Fourier space. Derivatives, correlation functions, and convolutions can be written as simple products when the functions are Fourier transformed. This has been important to us when calculating correlation functions (eqn 10.4), summing random variables (Exercises 1.2 and 12.11), and calculating susceptibilities (eqns 10.30, 10.39, and 10.53, and Exercise 10.9). In each case, we turn a calculus calculation into algebra.

• Linear differential equations in translationally-invariant systems have solutions in Fourier space.² We have used Fourier methods for solving the diffusion equation (Section 2.4.1), and more broadly to solve for correlation functions and susceptibilities (Chapter 10).

² Translation invariance in Hamiltonian systems implies momentum conservation. This is why in quantum mechanics Fourier transforms convert position-space wavefunctions into momentum-space wavefunctions, even for systems which are not translation invariant.

In Section A.1 we introduce the conventions typically used in physics for the Fourier series, Fourier transform, and fast Fourier transform. In Section A.2 we derive their integral and differential properties. In Section A.3, we interpret the Fourier transform as an orthonormal change-of-basis in function space. And finally, in Section A.4 we explain why Fourier methods are so useful for solving differential equations by exploring their connection to translational symmetry.

A.1 Fourier conventions

Here we define the Fourier series, the Fourier transform, and the fast Fourier transform, as they are commonly defined in physics and as they are used in this text.


The Fourier series for functions of time, periodic with period $T$, is
\[
\tilde{y}_m = \frac{1}{T} \int_0^T y(t) \exp(i\omega_m t)\, dt, \tag{A.1}
\]
where $\omega_m = 2\pi m/T$, with integer $m$. The Fourier series can be re-summed to retrieve the original function using the inverse Fourier series:
\[
y(t) = \sum_{m=-\infty}^{\infty} \tilde{y}_m \exp(-i\omega_m t). \tag{A.2}
\]

Fourier series of functions in space are defined with the opposite sign convention³ in the complex exponentials. Thus in a three-dimensional box of volume $V = L \times L \times L$ with periodic boundary conditions, these formulæ become
\[
\tilde{y}_{\mathbf{k}} = \frac{1}{V} \int y(\mathbf{x}) \exp(-i\mathbf{k}\cdot\mathbf{x})\, dV, \tag{A.3}
\]
and
\[
y(\mathbf{x}) = \sum_{\mathbf{k}} \tilde{y}_{\mathbf{k}} \exp(i\mathbf{k}\cdot\mathbf{x}), \tag{A.4}
\]
where the $\mathbf{k}$ run over a lattice of wavevectors
\[
\mathbf{k}_{(m,n,o)} = [2\pi m/L,\ 2\pi n/L,\ 2\pi o/L] \tag{A.5}
\]

³ This inconsistent convention allows waves of positive frequency to propagate forward rather than backward. A single component of the inverse transform, $e^{i\mathbf{k}\cdot\mathbf{x}} e^{-i\omega t} = e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}$, propagates in the $+\mathbf{k}$ direction with speed $\omega/|\mathbf{k}|$; had we used a $+i$ for Fourier transforms in both space and time, $e^{i(\mathbf{k}\cdot\mathbf{x} + \omega t)}$ would move backward (along $-\mathbf{k}$) for $\omega > 0$.

in the box.

The Fourier transform is defined for functions on the entire infinite line:
\[
\tilde{y}(\omega) = \int_{-\infty}^{\infty} y(t) \exp(i\omega t)\, dt, \tag{A.6}
\]
where now $\omega$ takes on all values.⁴ We regain the original function by

⁴ Why do we divide by $T$ or $L$ for the series and not for the transform? Imagine a system in an extremely large box. Fourier series are used for functions which extend over the entire box; hence we divide by the box size to keep them finite as $L \to \infty$. Fourier transforms are usually used for functions which vanish quickly, so they remain finite as the box size gets large.

[Fig. A.1 Approximating the integral as a sum. By approximating the integral $\int \tilde{y}(\omega) \exp(-i\omega t)\, d\omega$ as a sum over the equally-spaced points $\omega_m$, $\sum_m \tilde{y}(\omega_m) \exp(-i\omega_m t)\, \Delta\omega$, we can connect the formula for the Fourier transform to the formula for the Fourier series, explaining the factor $1/2\pi$ in eqn A.7.]
doing the inverse Fourier transform:
\[
y(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \tilde{y}(\omega) \exp(-i\omega t)\, d\omega. \tag{A.7}
\]
This is related to the inverse Fourier series by a continuum limit (Fig. A.1):
\[
\frac{1}{2\pi} \int d\omega \approx \frac{1}{2\pi} \sum_{\omega} \Delta\omega = \frac{1}{2\pi} \sum_{\omega} \frac{2\pi}{T} = \frac{1}{T} \sum_{\omega}, \tag{A.8}
\]

where the $1/T$ here compensates for the factor of $T$ in the definitions of the forward Fourier series. In three dimensions the Fourier transform formula A.6 is largely unchanged,
\[
\tilde{y}(\mathbf{k}) = \int y(\mathbf{x}) \exp(-i\mathbf{k}\cdot\mathbf{x})\, dV, \tag{A.9}
\]
while the inverse Fourier transform gets the cube of the prefactor:
\[
y(\mathbf{x}) = \frac{1}{(2\pi)^3} \int_{-\infty}^{\infty} \tilde{y}(\mathbf{k}) \exp(i\mathbf{k}\cdot\mathbf{x})\, d\mathbf{k}. \tag{A.10}
\]


The fast Fourier transform (FFT) starts with $N$ equally-spaced data points $y_\ell$, and returns a new set of complex numbers $\tilde{y}^{\mathrm{FFT}}_m$:
\[
\tilde{y}^{\mathrm{FFT}}_m = \sum_{\ell=0}^{N-1} y_\ell \exp(i 2\pi m \ell/N), \tag{A.11}
\]
with $m = 0, \ldots, N-1$. The inverse of the FFT is given by
\[
y_\ell = \frac{1}{N} \sum_{m=0}^{N-1} \tilde{y}^{\mathrm{FFT}}_m \exp(-i 2\pi m \ell/N). \tag{A.12}
\]
The FFT essentially samples the function $y(t)$ at equally-spaced points $t_\ell = \ell T/N$ for $\ell = 0, \ldots, N-1$:
\[
\tilde{y}^{\mathrm{FFT}}_m = \sum_{\ell=0}^{N-1} y_\ell \exp(i\omega_m t_\ell). \tag{A.13}
\]
It is clear from eqn A.11 that $\tilde{y}^{\mathrm{FFT}}_{m+N} = \tilde{y}^{\mathrm{FFT}}_m$, so the fast Fourier transform is periodic with period $\omega_N = 2\pi N/T$. The inverse transform can also be written
\[
y_\ell = \frac{1}{N} \sum_{m=-N/2+1}^{N/2} \tilde{y}^{\mathrm{FFT}}_m \exp(-i\omega_m t_\ell), \tag{A.14}
\]
where we have centered⁵ the sum $\omega_m$ at $\omega = 0$ by using the periodicity.⁶

⁵ If $N$ is odd, to center the FFT the sum should be taken over $-(N-1)/2 \le m \le (N-1)/2$.

⁶ Notice that the FFT returns the negative $\omega$ Fourier coefficients as the last half of the array, $m = N/2+1, N/2+2, \ldots$ (This works because $-N/2 + j$ and $N/2 + j$ differ by $N$, the periodicity of the FFT.) One must be careful about this when using Fourier transforms to solve calculus problems numerically. For example, to evolve a density $\rho(x)$ under the diffusion equation (Section 2.4.1) one must multiply the first half of the array $\tilde{\rho}_m$ by $\exp(-Dk_m^2 t) = \exp(-D[m(2\pi/L)]^2 t)$, but multiply the second half by $\exp(-D(K - k_m)^2 t) = \exp(-D[(N-m)(2\pi/L)]^2 t)$.

Often the values $y(t)$ (or the data points $y_\ell$) are real. In this case, eqns A.1 and A.6 show that the negative Fourier amplitudes are the complex conjugates of the positive ones: $\tilde{y}(\omega) = \tilde{y}^*(-\omega)$. Hence for real functions the real part of the Fourier amplitude will be even and the imaginary part will be odd.⁷

⁷ This allows one to write slightly faster FFTs specialized for real functions. One pays for the higher speed by an extra programming step unpacking the resulting Fourier spectrum.
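The FFT conventions above can be checked directly against a library implementation (a sketch assuming NumPy; note that `np.fft.fft` uses the opposite sign convention, $\exp(-i 2\pi m\ell/N)$, so the forward transform of eqn A.11 corresponds to $N$ times NumPy's *inverse* FFT):

```python
import numpy as np

# Check the FFT conventions of eqns A.11-A.12 against NumPy.
N = 8
rng = np.random.default_rng(0)
y = rng.standard_normal(N)

l = np.arange(N)
m = np.arange(N)
# Eqn A.11: y~_m = sum_l y_l exp(+i 2 pi m l / N)
y_fft = (y[None, :] * np.exp(1j * 2 * np.pi * m[:, None] * l / N)).sum(axis=1)

# NumPy's fft uses exp(-i 2 pi m l / N); with our +i convention the
# forward FFT is N times NumPy's inverse transform.
assert np.allclose(y_fft, N * np.fft.ifft(y))

# The inverse (eqn A.12) recovers the original samples:
y_back = (y_fft[None, :] * np.exp(-1j * 2 * np.pi * l[:, None] * m / N)).sum(axis=1) / N
assert np.allclose(y_back, y)
```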

The reader may wonder why there are so many versions of roughly the same Fourier operation.

(1) The function $y(t)$ can be defined on a finite interval with periodic boundary conditions on $(0, T)$ (series, FFT) or defined in all space (transform). In the periodic case, the Fourier coefficients are defined only at discrete wavevectors $\omega_m = 2\pi m/T$ consistent with the periodicity of the function; in the infinite system the coefficients are defined at all $\omega$.

(2) The function $y(t)$ can be defined at a discrete set of $N$ points $t_n = n\,\Delta t = nT/N$ (FFT), or at all points $t$ in the range (series, transform). If the function is defined only at discrete points, the Fourier coefficients are periodic with period $\omega_N = 2\pi/\Delta t = 2\pi N/T$.⁸

⁸ There is one more logical possibility: a discrete set of points that fill all space; the atomic displacements in an infinite crystal are the classic example. In Fourier space, such a system has continuous $k$, but periodic boundary conditions at $\pm K/2 = \pm\pi/a$ (the edges of the Brillouin zone).

There are several arbitrary choices made in defining these Fourier methods that vary from one field to another.

• Some use the notation $j = \sqrt{-1}$ instead of $i$.


• More substantively, some use the complex conjugate of our formulæ, substituting $-i$ for $i$ in the time or space transform formulæ. This alternative convention makes no change for any real quantity.⁹

⁹ The real world is invariant under the transformation $i \to -i$, but complex quantities will get conjugated. Swapping $i$ for $-i$ in the time series formulæ, for example, would make $\chi''(\omega) = -\mathrm{Im}[\chi(\omega)]$ in eqn 10.31 and would make $\chi$ analytic in the lower half-plane in Fig. 10.12.

• Some use a $1/\sqrt{T}$ and $1/\sqrt{2\pi}$ factor symmetrically on the Fourier and inverse Fourier operations.

• Some use frequency and wavelength ($f = \omega/2\pi$ and $\lambda = 2\pi/k$) instead of angular frequency $\omega$ and wavevector $k$. This makes the transform and inverse transform more symmetric, and avoids some of the prefactors.

Our Fourier conventions are those most commonly used in physics.

A.2 Derivatives, convolutions, and correlations

The important differential and integral operations become multiplications in Fourier space. A calculus problem in $t$ or $x$ thus becomes an algebra exercise in $\omega$ or $k$.

Integrals and derivatives. Because $(d/dt)\, e^{-i\omega t} = -i\omega\, e^{-i\omega t}$, the Fourier coefficient of the derivative of a function $y(t)$ is $-i\omega$ times the Fourier coefficient of the function:
\[
dy/dt = \sum_m \tilde{y}_m \left(-i\omega_m \exp(-i\omega_m t)\right) = \sum_m \left(-i\omega_m \tilde{y}_m\right) \exp(-i\omega_m t), \tag{A.15}
\]
so
\[
\widetilde{\left.\frac{dy}{dt}\right.}\,\Bigg|_\omega = -i\omega\, \tilde{y}_\omega. \tag{A.16}
\]
This holds also for the Fourier transform and the fast Fourier transform. Since the derivative of the integral gives back the original function, the Fourier series for the indefinite integral of a function $y$ is thus given by dividing by $-i\omega$:
\[
\widetilde{{\textstyle\int} y(t)\, dt} = \frac{\tilde{y}_\omega}{-i\omega} = \frac{i\,\tilde{y}_\omega}{\omega} \tag{A.17}
\]
except at $\omega = 0$.¹⁰

¹⁰ Either the mean $\tilde{y}(\omega = 0)$ is zero or it is non-zero. If the mean of the function is zero, then $\tilde{y}(\omega)/\omega = 0/0$ is undefined at $\omega = 0$. This makes sense; the indefinite integral has an arbitrary integration constant, which gives its Fourier series an arbitrary value at $\omega = 0$. If the mean of the function $y$ is not zero, then the integral of the function will have a term $\bar{y}\,(t - t_0)$. Hence the integral is not periodic and has no Fourier series. (On the infinite interval the integral has no Fourier transform because it is not in $L^2$.)
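The derivative rule can be demonstrated numerically (a sketch assuming NumPy; NumPy's FFT uses the opposite sign convention from ours, so there a derivative multiplies each mode by $+ik$ rather than $-i\omega$):

```python
import numpy as np

# Spectral derivative: differentiate sin(3x) on [0, 2 pi) by multiplying
# each Fourier mode by i k (NumPy's sign convention) and transforming back.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N
y = np.sin(3 * x)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavevectors, FFT ordering
dy = np.fft.ifft(1j * k * np.fft.fft(y)).real

assert np.allclose(dy, 3 * np.cos(3 * x))    # d/dx sin(3x) = 3 cos(3x)
```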

These relations are invaluable in the solution of many linear partial differential equations. For example, we saw in Section 2.4.1 that the diffusion equation
\[
\frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial x^2} \tag{A.18}
\]
becomes manageable when we Fourier transform $x$ to $k$:
\[
\frac{\partial \tilde{\rho}_k}{\partial t} = -Dk^2 \tilde{\rho}_k, \tag{A.19}
\]
\[
\tilde{\rho}_k(t) = \tilde{\rho}_k(0) \exp(-Dk^2 t). \tag{A.20}
\]
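This solution strategy is easy to sketch numerically (assuming NumPy; `np.fft.fftfreq` supplies the wavevectors in the FFT's ordering, with the negative-$k$ modes in the second half of the array, the bookkeeping warned about in the FFT footnote above):

```python
import numpy as np

# Evolve a density rho(x) under the diffusion equation via eqn A.20:
# multiply each Fourier mode by exp(-D k^2 t), then transform back.
N, L, D, t = 128, 10.0, 0.5, 0.1
x = np.arange(N) * L / N
rho0 = np.exp(-(x - L / 2) ** 2)              # initial Gaussian blob

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # wavevectors k_m, FFT ordering
rho_t = np.fft.ifft(np.exp(-D * k**2 * t) * np.fft.fft(rho0)).real

assert rho_t.max() < rho0.max()               # the blob spreads and flattens
assert np.isclose(rho_t.sum(), rho0.sum())    # diffusion conserves mass
```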

Correlation functions and convolutions. The absolute square of the Fourier transform¹¹ $|\tilde{y}(\omega)|^2$ is given by the Fourier transform of the

¹¹ The absolute square of the Fourier transform of a time signal is called the power spectrum.


correlation function $C(\tau) = \langle y(t)\, y(t+\tau)\rangle$:
\[
\begin{aligned}
|\tilde{y}(\omega)|^2 &= \tilde{y}(\omega)^* \tilde{y}(\omega) = \int dt'\, e^{-i\omega t'} y(t') \int dt\, e^{i\omega t} y(t) \\
&= \int dt\, dt'\, e^{i\omega(t - t')} y(t')\, y(t) = \int d\tau\, e^{i\omega\tau} \int dt'\, y(t')\, y(t'+\tau) \\
&= \int d\tau\, e^{i\omega\tau}\, T\, \langle y(t)\, y(t+\tau)\rangle = T \int d\tau\, e^{i\omega\tau} C(\tau) \\
&= T\, \tilde{C}(\omega),
\end{aligned} \tag{A.21}
\]
where $T$ is the total time $t$ during which the Fourier spectrum is being measured. Thus diffraction experiments, by measuring the square of the $k$-space Fourier transform, give us the spatial correlation function for the system (Section 10.2).
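The discrete analogue of eqn A.21 can be checked numerically (a NumPy sketch; here the number of samples $N$ plays the role of the total measurement time $T$, and the autocorrelation is taken circularly):

```python
import numpy as np

# Wiener-Khinchin check, the discrete analogue of eqn A.21: the power
# spectrum |y~|^2 equals N times the FFT of the circular autocorrelation
# C(tau) = <y(t) y(t + tau)>.
rng = np.random.default_rng(2)
N = 32
y = rng.standard_normal(N)

C = np.array([np.mean(y * np.roll(y, -tau)) for tau in range(N)])
power = np.abs(np.fft.fft(y)) ** 2

assert np.allclose(power, N * np.fft.fft(C).real)
```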

The convolution¹² $h(z)$ of two functions $f(x)$ and $g(y)$ is defined as
\[
h(z) = \int f(x)\, g(z - x)\, dx. \tag{A.22}
\]
The Fourier transform of the convolution is the product of the Fourier transforms. In three dimensions,¹³
\[
\begin{aligned}
\tilde{f}(\mathbf{k})\,\tilde{g}(\mathbf{k}) &= \int e^{-i\mathbf{k}\cdot\mathbf{x}} f(\mathbf{x})\, d\mathbf{x} \int e^{-i\mathbf{k}\cdot\mathbf{y}} g(\mathbf{y})\, d\mathbf{y} \\
&= \int e^{-i\mathbf{k}\cdot(\mathbf{x}+\mathbf{y})} f(\mathbf{x})\, g(\mathbf{y})\, d\mathbf{x}\, d\mathbf{y} = \int e^{-i\mathbf{k}\cdot\mathbf{z}}\, d\mathbf{z} \int f(\mathbf{x})\, g(\mathbf{z}-\mathbf{x})\, d\mathbf{x} \\
&= \int e^{-i\mathbf{k}\cdot\mathbf{z}} h(\mathbf{z})\, d\mathbf{z} = \tilde{h}(\mathbf{k}).
\end{aligned} \tag{A.23}
\]

¹² Convolutions show up in sums and Green's functions. The sum $z = x + y$ of two random vector quantities with probability distributions $f(x)$ and $g(y)$ has a probability distribution given by the convolution of $f$ and $g$ (Exercise 1.2). An initial condition $f(x, t_0)$ propagated in time to $t_0 + \tau$ is given by convolving with a Green's function $g(y, \tau)$ (Section 2.4.2).

¹³ The convolution and correlation theorems are closely related; we do correlations in time and convolutions in space to illustrate both the one-dimensional and vector versions of the calculation.

A.3 Fourier methods and function space

There is a nice analogy between the space of vectors $\mathbf{r}$ in three dimensions and the space of functions $y(t)$ periodic with period $T$, which provides a simple way of thinking about Fourier series. It is natural to define our function space to include all complex functions $y(t)$. (After all, we want the complex Fourier plane waves $e^{-i\omega_m t}$ to be in our space.) Let us list the following common features of these two spaces.

• Vector space. A vector $\mathbf{r} = (r_1, r_2, r_3)$ in $\mathbb{R}^3$ can be thought of as a real-valued function on the set $\{1, 2, 3\}$. Conversely, the function $y(t)$ can be thought of as a vector with one complex component for each $t \in [0, T)$.

Mathematically, this is an evil analogy. Most functions which have independent random values for each point $t$ are undefinable, unintegrable, and generally pathological. The space becomes well defined if we confine ourselves to functions $y(t)$ whose absolute squares $|y(t)|^2 = y(t)\, y^*(t)$ can be integrated. This vector space of functions is called $L^2$.¹⁴

¹⁴ More specifically, the Fourier transform is usually defined on $L^2[\mathbb{R}]$, and the Fourier series is defined on $L^2[0, T]$.


• Inner product. The analogy to the dot product of two three-dimensional vectors $\mathbf{r} \cdot \mathbf{s} = r_1 s_1 + r_2 s_2 + r_3 s_3$ is an inner product between two functions $y$ and $z$:
\[
y \cdot z = \frac{1}{T} \int_0^T y(t)\, z^*(t)\, dt. \tag{A.24}
\]
You can think of this inner product as adding up all the products $y_t z^*_t$ over all points $t$, except that we weight each point by $dt/T$.

• Norm. The distance between two three-dimensional vectors $\mathbf{r}$ and $\mathbf{s}$ is given by the norm of the difference $|\mathbf{r} - \mathbf{s}|$. The norm of a vector is the square root of the dot product of the vector with itself, so $|\mathbf{r} - \mathbf{s}| = \sqrt{(\mathbf{r} - \mathbf{s}) \cdot (\mathbf{r} - \mathbf{s})}$. To make this inner-product norm work in function space, we need to know that the inner product of a function with itself is never negative. This is why, in our definition A.24, we took the complex conjugate of $z(t)$. This norm on function space is called the $L^2$ norm:
\[
\|y\|_2 = \sqrt{\frac{1}{T} \int_0^T |y(t)|^2\, dt}. \tag{A.25}
\]
Thus our restriction to square-integrable functions makes the norm of all functions in our space finite.¹⁵

¹⁵ Another important property is that the only vector whose norm is zero is the zero vector. There are many functions whose absolute squares have integral zero, like the function which is zero except at $T/2$, where it is one, and the function which is zero on irrationals and one on rationals. Mathematicians finesse this difficulty by defining the vectors in $L^2$ not to be functions, but rather to be equivalence classes of functions whose relative distance is zero. Hence the zero vector in $L^2$ includes all functions with norm zero.

• Basis. A natural basis for $\mathbb{R}^3$ is given by the three unit vectors $\hat{x}_1$, $\hat{x}_2$, $\hat{x}_3$. A natural basis for our space of functions is given by the functions $\hat{f}_m = e^{-i\omega_m t}$, with $\omega_m = 2\pi m/T$ to keep them periodic with period $T$.

• Orthonormality. The basis in $\mathbb{R}^3$ is orthonormal, with $\hat{x}_i \cdot \hat{x}_j$ equaling one if $i = j$ and zero otherwise. Is this also true of the vectors in our basis of plane waves? They are normalized:
\[
\|\hat{f}_m\|_2^2 = \frac{1}{T} \int_0^T |e^{-i\omega_m t}|^2\, dt = 1. \tag{A.26}
\]
They are also orthogonal, with
\[
\hat{f}_m \cdot \hat{f}_n = \frac{1}{T} \int_0^T e^{-i\omega_m t} e^{i\omega_n t}\, dt = \frac{1}{T} \int_0^T e^{-i(\omega_m - \omega_n)t}\, dt = \frac{1}{-i(\omega_m - \omega_n)T}\, e^{-i(\omega_m - \omega_n)t} \Big|_0^T = 0 \tag{A.27}
\]
(unless $m = n$) since $e^{-i(\omega_m - \omega_n)T} = e^{-i2\pi(m-n)} = 1 = e^{-i0}$.

• Coefficients. The coefficients of a three-dimensional vector are given by taking dot products with the basis vectors: $r_n = \mathbf{r} \cdot \hat{x}_n$. The analogy in function space gives us the definition of the Fourier coefficients, eqn A.1:
\[
\tilde{y}_m = y \cdot \hat{f}_m = \frac{1}{T} \int_0^T y(t) \exp(i\omega_m t)\, dt. \tag{A.28}
\]
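The orthonormality relations A.26 and A.27 are easy to verify numerically (a NumPy sketch, with the inner-product integral of eqn A.24 approximated as a mean over a fine grid):

```python
import numpy as np

# Numerical check of eqns A.26-A.27: the inner product
# y . z = (1/T) * integral_0^T y(t) z*(t) dt, done as a grid average.
T, M = 1.0, 2048
t = np.arange(M) * T / M
w = 2 * np.pi / T                      # omega_1 = 2 pi / T

def inner(m, n):
    fm = np.exp(-1j * m * w * t)       # basis function f_m = exp(-i omega_m t)
    fn = np.exp(-1j * n * w * t)
    return np.mean(fm * np.conj(fn))

assert np.isclose(inner(3, 3), 1.0)    # normalized (eqn A.26)
assert np.isclose(inner(3, 5), 0.0)    # orthogonal (eqn A.27)
```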

• Completeness. We can write an arbitrary three-dimensional vector $\mathbf{r}$ by summing the basis vectors weighted by the coefficients: $\mathbf{r} = \sum r_n \hat{x}_n$. The analogy in function space gives us the formula A.2 for the inverse Fourier series:
\[
y = \sum_{m=-\infty}^{\infty} \tilde{y}_m \hat{f}_m, \qquad
y(t) = \sum_{m=-\infty}^{\infty} \tilde{y}_m \exp(-i\omega_m t). \tag{A.29}
\]

One says that a basis is complete if any vector can be expanded in that basis. Our functions $\hat{f}_m$ are complete in $L^2$.¹⁶

¹⁶ You can imagine that proving they are complete would involve showing that there are no functions in $L^2$ which are 'perpendicular' to all the Fourier modes. This is the type of tough question that motivates the mathematical field of real analysis.

Our coefficient eqn A.28 follows from our completeness eqn A.29 and orthonormality:
\[
\tilde{y}_\ell \stackrel{?}{=} y \cdot \hat{f}_\ell = \left(\sum_m \tilde{y}_m \hat{f}_m\right) \cdot \hat{f}_\ell = \sum_m \tilde{y}_m \left(\hat{f}_m \cdot \hat{f}_\ell\right) = \tilde{y}_\ell \tag{A.30}
\]
or, writing things out,
\[
\tilde{y}_\ell \stackrel{?}{=} \frac{1}{T} \int_0^T y(t)\, e^{i\omega_\ell t}\, dt
= \frac{1}{T} \int_0^T \left(\sum_m \tilde{y}_m e^{-i\omega_m t}\right) e^{i\omega_\ell t}\, dt
= \sum_m \tilde{y}_m \left(\frac{1}{T} \int_0^T e^{-i\omega_m t} e^{i\omega_\ell t}\, dt\right) = \tilde{y}_\ell. \tag{A.31}
\]

Our function space, together with our inner product (eqn A.24), is a Hilbert space (a complete inner product space).

A.4 Fourier and translational symmetry

[Fig. A.2 The mapping $T_\Delta$ takes function space into function space, shifting the function to the right by a distance $\Delta$. For a physical system that is translation invariant, a solution translated to the right is still a solution.]

Why are Fourier methods so useful? In particular, why are the solutions to linear differential equations so often given by plane waves: sines and cosines and $e^{ikx}$?¹⁷ Most of our basic equations are derived for systems with a translational symmetry. Time-translational invariance holds for any system without an explicit external time-dependent force; invariance under spatial translations holds for all homogeneous systems.

¹⁷ It is true, we are making a big deal about what is usually called the separation of variables method. But why does separation of variables so often work, and why do the separated variables so often form sinusoids and exponentials?

Why are plane waves special for systems with translational invariance? Plane waves are the eigenfunctions of the translation operator. Define $T_\Delta$, an operator which takes function space into itself, and acts to shift the function a distance $\Delta$ to the right:¹⁸
\[
T_\Delta\{f\}(x) = f(x - \Delta). \tag{A.32}
\]

¹⁸ That is, if $g = T_\Delta\{f\}$, then $g(x) = f(x - \Delta)$, so $g$ is $f$ shifted to the right by $\Delta$.

Any solution $f(x, t)$ to a translation-invariant equation will be mapped by $T_\Delta$ onto another solution. Moreover, $T_\Delta$ is a linear operator (translating the sum is the sum of the translated functions). If we think of the


translation operator as a big matrix acting on function space, we can ask for its eigenvalues¹⁹ and eigenvectors (or eigenfunctions) $f_k$:
\[
T_\Delta\{f_k\}(x) = f_k(x - \Delta) = \lambda_k f_k(x). \tag{A.33}
\]
This equation is solved by our complex plane waves $f_k(x) = e^{ikx}$, with $\lambda_k = e^{-ik\Delta}$.²⁰

¹⁹ You are familiar with eigenvectors of $3 \times 3$ symmetric matrices $M$, which transform into multiples of themselves when multiplied by $M$: $M \cdot \hat{e}_n = \lambda_n \hat{e}_n$. The translation $T_\Delta$ is a linear operator on function space just as $M$ is a linear operator on $\mathbb{R}^3$.

²⁰ The real exponential $e^{Ax}$ is also an eigenstate, with eigenvalue $e^{-A\Delta}$. This is also allowed. Indeed, the diffusion equation is time-translation invariant, and it has solutions which decay exponentially in time ($e^{-\omega_k t} e^{ikx}$, with $\omega_k = Dk^2$). Exponentially decaying solutions in space also arise in some translation-invariant problems, such as quantum tunneling and the penetration of electromagnetic radiation into metals.
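The eigenfunction property of eqn A.33 is a one-line numerical check (a NumPy sketch):

```python
import numpy as np

# Plane waves are eigenfunctions of translation (eqn A.33): shifting
# f_k(x) = exp(i k x) right by Delta multiplies it by lambda_k = exp(-i k Delta).
k, delta = 2.0, 0.3
x = np.linspace(0.0, 5.0, 50)

f = np.exp(1j * k * x)
f_shifted = np.exp(1j * k * (x - delta))      # T_Delta{f}(x) = f(x - Delta)

assert np.allclose(f_shifted, np.exp(-1j * k * delta) * f)
```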

Why are these eigenfunctions useful? The time evolution of an eigenfunction must have the same eigenvalue $\lambda$! The argument is something of a tongue-twister: translating the time-evolved eigenfunction gives the same answer as time evolving the translated eigenfunction, which is time evolving $\lambda$ times the eigenfunction, which is $\lambda$ times the time-evolved eigenfunction.²¹

²¹ Written out in equations, this simple idea is even more obscure. Let $U_t$ be the time-evolution operator for a translationally-invariant equation (like the diffusion equation of Section 2.2). That is, $U_t\{\rho\}$ evolves the function $\rho(x, \tau)$ into $\rho(x, \tau + t)$. ($U_t$ is not translation in time, but evolution in time.) Because our system is translation invariant, translated solutions are also solutions for translated initial conditions: $T_\Delta\{U_t\{\rho\}\} = U_t\{T_\Delta\{\rho\}\}$. Now, if $\rho_k(x, 0)$ is an eigenstate of $T_\Delta$ with eigenvalue $\lambda_k$, is $\rho_k(x, t) = U_t\{\rho_k\}(x)$ an eigenstate with the same eigenvalue? Yes indeed:
\[
\begin{aligned}
T_\Delta\{\rho_k(x, t)\} &= T_\Delta\{U_t\{\rho_k(x, 0)\}\} = U_t\{T_\Delta\{\rho_k(x, 0)\}\} \\
&= U_t\{\lambda_k \rho_k(x, 0)\} = \lambda_k U_t\{\rho_k(x, 0)\} = \lambda_k \rho_k(x, t)
\end{aligned} \tag{A.34}
\]
because the evolution law $U_t$ is linear.

The fact that the different eigenvalues do not mix under time evolution is precisely what made our calculation work; time evolving $A_0 e^{ikx}$ had to give a multiple $A(t) e^{ikx}$, since there is only one eigenfunction of translations with the given eigenvalue. Once we have reduced the partial differential equation to an ordinary differential equation for a few eigenstate amplitudes, the calculation becomes feasible.

Quantum physicists will recognize the tongue-twister above as a statement about simultaneously diagonalizing commuting operators: since translations commute with time evolution, one can find a complete set of translation eigenstates which are also time-evolution solutions. Mathematicians will recognize it from group representation theory: the solutions to a translation-invariant linear differential equation form a representation of the translation group, and hence they can be decomposed into irreducible representations of that group. These approaches are basically equivalent, and very powerful. One can also use these approaches for systems with other symmetries. For example, just as the invariance of homogeneous systems under translations leads to plane-wave solutions with definite wavevector $k$, it is true that:

• the invariance of isotropic systems (like the hydrogen atom) under the rotation group leads naturally to solutions involving spherical harmonics with definite angular momenta $\ell$ and $m$;

• the invariance of the strong interaction under SU(3) leads naturally to the '8-fold way' families of mesons and baryons; and

• the invariance of the Universe under the Poincaré group of space-time symmetries (translations, rotations, and Lorentz boosts) leads naturally to particles with definite mass and spin!


Exercises

We begin with three Fourier series exercises: Sound wave, Fourier cosines (numerical), and Double sinusoids. We then explore Fourier transforms with Fourier Gaussians (numerical) and Uncertainty, focusing on the effects of translating and scaling the width of the function to be transformed. Fourier relationships analyzes the normalization needed to go from the FFT to the Fourier series, and Aliasing and windowing explores two common numerical inaccuracies associated with the FFT. White noise explores the behavior of Fourier methods on random functions. Fourier matching is a quick, visual test of one's understanding of Fourier methods. Finally, Gibbs phenomenon explores what happens when you torture a Fourier series by insisting that smooth sinusoids add up to a function with a jump discontinuity.

(A.1) Sound wave. ①

A musical instrument playing a note of frequency $\omega_1$ generates a pressure wave $P(t)$ periodic with period $2\pi/\omega_1$: $P(t) = P(t + 2\pi/\omega_1)$. The complex Fourier series of this wave (eqn A.2) is zero except for $m = \pm 1$ and $\pm 2$, corresponding to the fundamental $\omega_1$ and the first overtone. At $m = 1$, the Fourier amplitude is $2 - i$, at $m = -1$ it is $2 + i$, and at $m = \pm 2$ it is 3. What is the pressure $P(t)$:

(A) $\exp((2 + i)\omega_1 t) + 2\exp(3\omega_1 t)$,
(B) $\exp(2\omega_1 t)\exp(i\omega_1 t) + 2\exp(3\omega_1 t)$,
(C) $\cos 2\omega_1 t - \sin \omega_1 t + 2\cos 3\omega_1 t$,
(D) $4\cos \omega_1 t - 2\sin \omega_1 t + 6\cos 2\omega_1 t$,
(E) $4\cos \omega_1 t + 2\sin \omega_1 t + 6\cos 2\omega_1 t$?

(A.2) Fourier cosines. (Computation) ②

In this exercise, we will use the computer to illustrate features of Fourier series and discrete fast Fourier transforms using sinusoidal waves. Download the Fourier software, or the relevant hints files, from the computer exercises section of the book web site [129].²²

First, we will take the Fourier series of periodic functions $y(x) = y(x + L)$ with $L = 20$. We will sample the function at $N = 32$ points, and use a FFT to approximate the Fourier series. The Fourier series will be plotted as functions of $k$, at $-k_{N/2}, \ldots, k_{N/2-2}, k_{N/2-1}$. (Remember that the negative $m$ points are given by the last half of the FFT.)

(a) Analytically (that is, with paper and pencil) derive the Fourier series $\tilde{y}_m$ in this interval for $\cos(k_1 x)$ and $\sin(k_1 x)$. Hint: They are zero except at the two values $m = \pm 1$. Use the spatial transform (eqn A.3).

(b) What spacing $\Delta k$ between $k$-points $k_m$ do you expect to find? What is $k_{N/2}$? Evaluate each analytically as a function of $L$ and numerically for $L = 20$.

Numerically (on the computer) choose a cosine wave $A\cos(k(x - x_0))$, evaluated at 32 points from $x = 0$ to 20 as described above, with $k = k_1 = 2\pi/L$, $A = 1$, and $x_0 = 0$. Examine its Fourier series.

(c) Check your predictions from part (a) for the Fourier series for $\cos(k_1 x)$ and $\sin(k_1 x)$. Check your predictions from part (b) for $\Delta k$ and for $k_{N/2}$.

Decrease $k$ to increase the number of wavelengths, keeping the number of data points fixed. Notice that the Fourier series looks fine, but that the real-space curves quickly begin to vary in amplitude, much like the patterns formed by beating (superimposing two waves of different frequencies). By increasing the number of data points, you can see that the beating effect is due to the small number of points we sample. Even for large numbers of sampled points $N$, though, beating will still happen at very small wavelengths (when we get close to $k_{N/2}$). Try various numbers of waves $m$ up to and past $m = N/2$.

(A.3) Double sinusoid. ②

Which picture represents the spatial Fourier series (eqn A.4) associated with the function $f(x) = 3\sin(x) + \cos(2x)$? (The solid line is the real part, the dashed line is the imaginary part.)

²² If this exercise is part of a computer lab, one could assign the analytical portions as a pre-lab exercise.


[Choices for Exercise A.3: five plots (A)–(E), each showing Re[ỹ(k)] (solid) and Im[ỹ(k)] (dashed) versus wavevector k, with both axes running from −2 to 2.]

(A.4) Fourier Gaussians. (Computation) ②
In this exercise, we will use the computer to illustrate features of Fourier transforms, focusing on the particular case of Gaussian functions, but illustrating general properties. Download the Fourier software or the relevant hints files from the computer exercises portion of the text web site [129].²³

The Gaussian distribution (also known as the normal distribution) has the form

G(x) = (1/(√(2π) σ)) exp(−(x − x₀)²/2σ²),   (A.35)

where σ is the standard deviation and x₀ is the center. Let

G₀(x) = (1/√(2π)) exp(−x²/2)   (A.36)

be the Gaussian of mean zero and σ = 1. The Fourier transform of G₀ is another Gaussian, of standard deviation one, but with no normalization factor:²⁴

G̃₀(k) = exp(−k²/2).   (A.38)

In this exercise, we study how the Fourier transform of G(x) varies as we change σ and x₀.
Widths. As we make the Gaussian narrower (smaller σ), it becomes more pointy. Shorter lengths mean higher wavevectors, so we expect that its Fourier transform will get wider.
(a) Starting with the Gaussian with σ = 1, numerically measure the width of its Fourier transform at some convenient height. (The full width at half maximum, FWHM, is a sensible choice.) Change σ to 2 and to 0.1, and measure the widths, to verify that the Fourier-space width goes inversely with the real-space width.
(b) Analytically show that this rule is true in general. Change variables in eqn A.6 to show that if z(x) = y(Ax) then z̃(k) = ỹ(k/A)/A. Using eqn A.36 and this general rule, write a formula for the Fourier transform of a Gaussian centered at zero with arbitrary width σ.
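The scaling rule of part (b) can be checked numerically; this sketch is ours, not the book's software, and assumes the continuum convention ỹ(k) = ∫ y(x) e^{−ikx} dx of eqn A.6, approximated by a discrete sum:

```python
import numpy as np

def ft(y, x):
    """Approximate the continuum transform y~(k) = integral y(x) e^{-ikx} dx."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    # the phase factor accounts for the grid starting at x[0] rather than 0
    return k, np.fft.fft(y) * dx * np.exp(-1j * k * x[0])

sigma = 2.0
x = np.linspace(-40, 40, 4096, endpoint=False)
G = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

k, Gk = ft(G, x)
# Part (b): a Gaussian of real-space width sigma transforms to a Gaussian of
# Fourier-space width 1/sigma, exp(-sigma^2 k^2 / 2): widths are inverse.
assert np.allclose(Gk, np.exp(-sigma**2 * k**2 / 2), atol=1e-8)
```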

²³If this exercise is taught as a computer lab, one could assign the analytical portions as a pre-lab exercise.
²⁴Here is an elementary-looking derivation. We complete the square inside the exponent, and change from x to y = x + ik:

(1/√(2π)) ∫_{−∞}^{∞} e^{−ikx} exp(−x²/2) dx = (1/√(2π)) ∫_{−∞}^{∞} exp(−(x + ik)²/2) dx exp((ik)²/2)
    = [∫_{−∞+ik}^{∞+ik} (1/√(2π)) exp(−y²/2) dy] exp(−k²/2).   (A.37)

The term in brackets is one (giving us e^{−k²/2}), but to show it we need to shift the integration contour from Im[y] = k to Im[y] = 0, which demands Cauchy's theorem (Fig. 10.11).


(c) Analytically compute the product Δx Δk of the FWHM of the Gaussians in real and Fourier space. (Your answer should be independent of the width σ.) This is related to the Heisenberg uncertainty principle, Δx Δp ≥ ℏ, which you learn about in quantum mechanics.
Translations. Notice that a narrow Gaussian centered at some large distance x₀ is a reasonable approximation to a δ-function. We thus expect that its Fourier transform will be similar to the plane wave G̃(k) ≈ exp(−ikx₀) we would get from δ(x − x₀).
(d) Numerically change the center of the Gaussian. How does the Fourier transform change? Convince yourself that it is being multiplied by the factor exp(−ikx₀). How does the power spectrum |G̃(k)|² change as we change x₀?
(e) Analytically show that this rule is also true in general. Change variables in eqn A.6 to show that if z(x) = y(x − x₀) then z̃(k) = exp(−ikx₀) ỹ(k). Using this general rule, extend your answer from part (b) to write the formula for the Fourier transform of a Gaussian of width σ and center x₀.
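The translation rule of parts (d) and (e) also checks out numerically; again a sketch of ours under the e^{−ikx} convention of eqn A.6, with a unit-width Gaussian and a shift x₀ = 5 chosen for illustration:

```python
import numpy as np

x = np.linspace(-40, 40, 4096, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)

def ft(y):
    # y~(k) ~ dx * sum_ell y(x_ell) e^{-i k x_ell}, for a grid starting at x[0]
    return np.fft.fft(y) * dx * np.exp(-1j * k * x[0])

x0 = 5.0
G0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
G = np.exp(-(x - x0)**2 / 2) / np.sqrt(2 * np.pi)

# Part (e): shifting the center by x0 multiplies the transform by
# exp(-i k x0); part (d): the power spectrum is therefore unchanged.
assert np.allclose(ft(G), np.exp(-1j * k * x0) * ft(G0), atol=1e-8)
assert np.allclose(np.abs(ft(G))**2, np.abs(ft(G0))**2, atol=1e-8)
```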

(A.5) Uncertainty. ②

[Fig. A.3 Real-space Gaussians: G(x) (dark) and G₀(x) (dashed) versus position x on −6 ≤ x ≤ 6.]

The dashed line in Fig. A.3 shows

G₀(x) = (1/√(2π)) exp(−x²/2).   (A.39)

The dark line shows another function G(x). The areas under the two curves G(x) and G₀(x) are the same. The dashed lines in the choices below represent the Fourier transform G̃₀(k) = exp(−k²/2). Which has a solid curve that represents the Fourier transform of G?

[Choices for Exercise A.5: five plots (A)–(E), each showing G̃(k) (solid) and G̃₀(k) (dashed) versus wavevector k.]

(A.6) Fourier relationships. ②
In this exercise, we explore the relationships between the Fourier series and the fast Fourier transform. The first is continuous and periodic in real space, and discrete and unbounded in Fourier space; the second is discrete and periodic both in real and in Fourier space. Thus, we must again convert integrals into sums (as in Fig. A.1).
As we take the number of points N in our FFT to ∞, the spacing between the points gets smaller and smaller, and the approximation of the integral as a sum gets better and better.
Let y_ℓ = y(t_ℓ) where t_ℓ = ℓ(T/N) = ℓ Δt. Approximate the Fourier series integral A.1 above as a sum over y_ℓ, (1/T) Σ_{ℓ=0}^{N−1} y(t_ℓ) exp(−iω_m t_ℓ) Δt. For small positive m, give the constant relating y^FFT_m to the Fourier series coefficient ỹ_m.
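The relation can be checked directly; this sketch is ours (not part of the text) and assumes numpy's forward FFT carries the same e^{−iω_m t_ℓ} phase as the sum above:

```python
import numpy as np

T, N = 10.0, 256
t = np.arange(N) * T / N                  # t_ell = ell * (T/N)
omega = 2 * np.pi * np.arange(N) / T      # omega_m = 2*pi*m / T
y = np.cos(3 * omega[1] * t)              # exact series coefficients 1/2 at m = +-3

# The Riemann sum (1/T) sum_ell y(t_ell) exp(-i omega_m t_ell) dt, term by term:
dt = T / N
ym = np.array([(1 / T) * np.sum(y * np.exp(-1j * w * t)) * dt for w in omega])

# The same sum is numpy's FFT divided by N: y_m^FFT = N * ym for each m.
assert np.allclose(ym, np.fft.fft(y) / N)
assert np.isclose(ym[3].real, 0.5) and np.isclose(ym[N - 3].real, 0.5)
```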

(A.7) Aliasing and windowing. (Computation) ③

In this exercise, we will use the computer to illustrate numerical challenges in using the fast Fourier transform. Download the Fourier software, or the relevant hints files, from the computer exercises section of the book web site [129].²⁵

The Fourier series ỹ_m runs over all integers m. The fast Fourier transform runs only over 0 ≤ m < N. There are three ways to understand this difference: function-space dimension, wavelengths, and aliasing.
Function-space dimension. The space of periodic functions y(x) on 0 ≤ x < L is infinite-dimensional, but we are sampling them only at N = 32 points. The space of possible fast Fourier series must also have N dimensions. Now, each coefficient of the FFT is complex (two dimensions), but the negative frequencies are complex conjugate to their positive partners (giving two net dimensions for the two wavevectors k_m and k_{−m} ≡ k_{N−m}). If you are fussy, ỹ₀ has no partner, but is real (only one dimension), and if N is even ỹ_{−N/2} also is partnerless, but is real. So N k-points are generated by N real points.
Wavelength. The points at which we sample the function are spaced Δx = L/N apart. It makes sense that the fast Fourier transform would stop when the wavelength becomes close to Δx; we cannot resolve wiggles shorter than our sample spacing.
(a) Analytically derive the formula for y_ℓ for a cosine wave at k_N, the first wavelength not calculated by our FFT. It should simplify to a constant. Give the simplified formula for y_ℓ at k_{N/2} (the first missing wavevector after we have shifted the large ms to N − m to get the negative frequencies). Numerically check your prediction for what y_ℓ looks like for cos(k_N x) and cos(k_{N/2} x).
So, the FFT returns Fourier components only until k_{N/2}, when there is one point per bump (half-period) in the cosine wave.
Aliasing. Suppose our function really does have wiggles with shorter distances than our sampling distance Δx. Then its fast Fourier transform will have contributions to the long-wavelength coefficients ỹ^FFT_m from these shorter-wavelength wiggles; specifically ỹ_{m±N}, ỹ_{m±2N}, etc. Let us work out a particular case of this: a short-wavelength cosine wave.
(b) On our sampled points x_ℓ, analytically show that exp(i k_{m±N} x_ℓ) = exp(i k_m x_ℓ). Show that the short-wavelength wave cos(k_{m+N} x_ℓ) = cos(k_m x_ℓ), and hence that its fast Fourier transform for small m will be a bogus peak at the long wavelength k_m. Numerically check your prediction for the transforms of cos(kx) for k > k_{N/2}.
If you sample a function at N points with Fourier components beyond k_{N/2}, their contributions get added to Fourier components at smaller wavevectors. This is called aliasing, and is an important source of error in Fourier methods. We always strive to sample enough points to avoid it.
You should see at least once how aliasing affects the FFT of functions that are not sines and cosines. Form a 32-point wave packet y(x) = (1/(√(2π)σ)) exp(−x²/2σ²). Change the width σ of the packet to make it thinner. Notice that when the packet begins to look ratty (roughly as thin as the spacing between the sampled points x_ℓ) the Fourier series hits the edges and overlaps; high-frequency components are 'folded over' or aliased into the lower frequencies.
Windowing. One often needs to take Fourier series of functions which are not periodic in the interval. Set the number of data points N to 256 (powers of two are faster) and compare y(x) = cos(k_m x) for m = 20 with an 'illegal' non-integer value m = 20.5. Notice that the plot of the real-space function y(x) is not periodic in the interval [0, L) for m = 20.5. Notice that its Fourier series looks pretty complicated. Each

²⁵If this exercise is taught as a computer lab, one could assign the analytical portions as a pre-lab exercise.


of the two peaks has broadened into a whole staircase. Try looking at the power spectrum (which is proportional to |ỹ|²), and again compare m = 20 with m = 20.5. This is a numerical problem known as windowing, and there are various schemes to minimize its effects as well.
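Both the aliasing identity of part (b) and the windowing staircase can be verified in a few lines; this is a sketch of ours (not the book's Fourier software), with the wavevector m = 3 and the 10⁻⁶ power threshold chosen for illustration:

```python
import numpy as np

L, N = 20.0, 256
x = np.arange(N) * L / N

def k(m):
    return 2 * np.pi * m / L

# Aliasing: on the sampled points, k_{m+N} x_ell and k_m x_ell differ by a
# multiple of 2*pi, so the two sampled cosines are identical.
assert np.allclose(np.cos(k(3 + N) * x), np.cos(k(3) * x))

def power(m):
    """Power spectrum |y_m|^2 of cos(k_m x) sampled at N points."""
    return np.abs(np.fft.fft(np.cos(k(m) * x)) / N) ** 2

# Windowing: an integer m gives two sharp peaks, while the non-periodic
# m = 20.5 leaks power into a broad staircase of neighboring wavevectors.
assert np.count_nonzero(power(20) > 1e-6) == 2
assert np.count_nonzero(power(20.5) > 1e-6) > 20
```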

(A.8) White noise. (Computation) ②
White light is a mixture of light of all frequencies. White noise is a mixture of all sound frequencies, with constant average power per unit frequency. The hissing noise you hear on radio and TV between stations is approximately white noise; there are a lot more high frequencies than low ones, so it sounds high-pitched.
Download the Fourier software or the relevant hints files from the computer exercises portion of the text web site [129].
What kind of time signal would generate white noise? Select White Noise, or generate independent random numbers y_ℓ = y(ℓL/N) chosen from a Gaussian²⁶ distribution ρ(y) = (1/√(2π)) exp(−y²/2). You should see a jagged, random function. Set the number of data points to, say, 1024.
Examine the Fourier transform of the noise signal. The Fourier transform of the white noise looks amazingly similar to the original signal. It is different, however, in two important ways. First, it is complex: there is a real part and an imaginary part. The second is for you to discover.
Examine the region near k = 0 on the Fourier plot, and describe how the Fourier transform of the noisy signal is different from a random function. In particular, what symmetry do the real and imaginary parts have? Can you show that this is true for any real function y(x)?
Now examine the power spectrum |ỹ|².²⁷ Check that the power is noisy, but on average is crudely independent of frequency. (You can check this best by varying the random number seed.) White noise is usually due to random, uncorrelated fluctuations in time.

(A.9) Fourier matching. ②
The top three plots (a)–(c) in Fig. A.4 are functions y(x) of position. For each, pick out which of the six plots (1)–(6) is the corresponding function ỹ(k) in Fourier space. (The dark line is the real part, the lighter dotted line is the imaginary part.) (This exercise should be fairly straightforward after doing Exercises A.2, A.4, and A.8.)

(A.10) Gibbs phenomenon. (Mathematics) ③
In this exercise, we will look at the Fourier series for the step function and the triangle function. They are challenging because of the sharp corners, which are hard for sine waves to mimic.
Consider a function y(x) which is A in the range 0 < x < L/2 and −A in the range L/2 < x < L. It is a kind of step function, since it takes a step downward at L/2 (Fig. A.5).²⁸

(a) As a crude approximation, the step function looks a bit like a chunky version of a sine wave, A sin(2πx/L). In this crude approximation, what would the complex Fourier series be (eqn A.4)?
(b) Show that the odd coefficients for the complex Fourier series of the step function are ỹ_m = −2Ai/(mπ) (m odd). What are the even ones? Check that the coefficients ỹ_m with m = ±1 are close to those you guessed in part (a).
(c) Setting A = 2 and L = 10, plot the partial sum of the Fourier series (eqn A.1) for m = −n, −n + 1, …, n with n = 1, 3, and 5. (You are likely to need to combine the coefficients ỹ_m and ỹ_{−m} into sines or cosines, unless your plotting package knows about complex exponentials.) Does it converge to the step function? If it is not too inconvenient, plot the partial sum up to n = 100, and concentrate especially on the overshoot near the jumps in the function at 0, L/2, and L. This overshoot is called the Gibbs phenomenon, and occurs when you try to approximate functions y(x) which have discontinuities.
One of the great features of Fourier series is that it makes taking derivatives and integrals easier. What does the integral of our step function look

²⁶We choose the numbers with probability given by the Gaussian distribution, but it would look about the same if we took numbers with a uniform probability in, say, the range (−1, 1).
²⁷For a time signal f(t), the average power at a certain frequency is proportional to |f̃(ω)|²; ignoring the proportionality constant, the latter is often termed the power spectrum. This name is sometimes also used for the square of the amplitude of spatial Fourier transforms as well.
²⁸It can be written in terms of the standard Heaviside step function Θ(x) = 0 for x < 0 and Θ(x) = 1 for x > 0, as y(x) = A(1 − 2Θ(x − L/2)).


[Fig. A.4 Fourier matching: three real-space functions y(x), panels (a)–(c), above six Fourier-space plots ỹ(k), panels (1)–(6).]

like? Let us sum the Fourier series for it!
(d) Calculate the Fourier series of the integral of the step function, using your complex Fourier series from part (b) and the formula A.17 for the Fourier series of the integral. Plot your results, doing partial sums up to ±m = n, with n = 1, 3, and 5, again with A = 2 and L = 10. Would the derivative of this function look like the step function? If it is convenient, do n = 100, and notice there are no overshoots.
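The partial sums of part (c) can be sketched numerically (our illustration; the text leaves the tools open). The paired coefficients ỹ_{±m} = ∓2Ai/(mπ) for odd m combine into real sine terms (4A/mπ) sin(k_m x):

```python
import numpy as np

A, L = 2.0, 10.0
x = np.linspace(0, L, 2001)

def partial_sum(n):
    """Partial Fourier series of the step, summing odd m = 1, 3, ..., <= n."""
    y = np.zeros_like(x)
    for m in range(1, n + 1, 2):
        y += (4 * A / (m * np.pi)) * np.sin(2 * np.pi * m * x / L)
    return y

# Away from the jumps the sum converges to +-A, but the overshoot near the
# jumps persists at roughly 9% of the total jump 2A: the Gibbs phenomenon.
assert abs(partial_sum(101)[500] - A) < 0.1       # x = 2.5, mid-plateau
assert 1.10 < partial_sum(101).max() / A < 1.25   # overshoot ~ 1.18 A
```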

[Fig. A.5 Step function: the step function y(x) (dark) and the crude one-term approximation (dashed) versus x on 0 ≤ x ≤ L, vertical axis from −A to A.]