Source: links.uwaterloo.ca/amath351docs/set9.pdf
Lecture 21
Laplace transforms revisited
You have seen Laplace transforms in an earlier course, e.g., AMATH 250 or MATH 228, so we
shall not dwell on the basics. The most important background material is listed in Section 3.1
of the AMATH 351 Course Notes, which we also include below.
Given a real- or complex-valued function y(t), t ≥ 0, the Laplace transform (LT) of y, to
be denoted by L[y] = Y (s), is defined by
Y(s) = \int_0^\infty e^{-st} y(t)\,dt   (1)
for all complex values of s such that the above improper integral converges. (That being said,
in most applications we shall be considering only real values of s.) Note that the LT is a linear
operator:
L[c_1 f + c_2 g] = c_1 L[f] + c_2 L[g],   (2)
provided that each transform exists.
Example 1: f(t) = e^{at}. Then

F(s) = L[e^{at}] = \int_0^\infty e^{-st} e^{at}\,dt   (3)
     = \int_0^\infty e^{(a-s)t}\,dt
     = \lim_{b\to\infty} \int_0^b e^{(a-s)t}\,dt
     = \lim_{b\to\infty} \left[ \frac{1}{a-s} e^{(a-s)t} \right]_{t=0}^{t=b}
     = \frac{1}{a-s} \lim_{b\to\infty} \left[ e^{(a-s)b} - 1 \right]
     = \frac{1}{s-a},   Re(s) > Re(a).
Note: From this point onward, we shall understand the following meaning of the improper integral,

\int_0^\infty h(t)\,dt = \lim_{b\to\infty} \int_0^b h(t)\,dt,   (4)

and omit writing “\lim_{b\to\infty}”.
Example 2: f(t) = t^n, n = 0, 1, 2, .... Then

L[t^n] = \int_0^\infty e^{-st} t^n\,dt = \frac{n!}{s^{n+1}},   Re(s) > 0.   (5)
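Both transforms above are easy to sanity-check numerically. The sketch below (plain Python; the helper `laplace_numeric` and its truncation/step parameters are our own choices, not part of the course notes) approximates the defining integral (1) with the trapezoidal rule and compares against the closed forms of Examples 1 and 2:

```python
import math

def laplace_numeric(f, s, T=40.0, n=100_000):
    # Trapezoidal approximation of \int_0^T e^{-st} f(t) dt;
    # for s large enough, the tail beyond T is negligible.
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

# Example 1: L[e^{at}](s) = 1/(s - a) for s > a
a, s = 1.0, 3.0
print(laplace_numeric(lambda t: math.exp(a * t), s), 1.0 / (s - a))

# Example 2: L[t^n](s) = n!/s^{n+1} for s > 0
n, s = 3, 2.0
print(laplace_numeric(lambda t: t ** n, s), math.factorial(n) / s ** (n + 1))
```

Each printed pair should agree to several decimal places.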
Definition: If f(t) is piecewise continuous on each interval [0, b] for b > 0 and there is a
constant α such that f(t) = O(eαt) as t → +∞, then f(t) is said to be of exponential order α
as t → +∞. In this case, F (s) = L[f ] exists for Re(s) > α.
Note: The mathematical statement,
f(t) = O(g(t)) as t → a , (6)
(where a could be ±∞), means that there exists a constant K ≥ 0 such that
|f(t)| ≤ K|g(t)| for all t in some neighbourhood of a . (7)
A list of LTs for functions commonly encountered in Calculus and applications
along with some basic properties of LTs is presented in the AMATH 351 Course
Notes by J. Wainwright on Page 233.
Differentiation formulae:
1. If f is continuous and f' is piecewise continuous on any interval [0, b], b > 0, and f is of exponential order α as t → +∞, then

L[f'] = sL[f] - f(0),   for Re(s) > α.   (8)
Proof:
L[f'] = \int_0^\infty e^{-st} f'(t)\,dt
      = \left[ e^{-st} f(t) \right]_0^\infty + s \int_0^\infty e^{-st} f(t)\,dt   (integration by parts)
      = -f(0) + sL[f].   (9)
2. Replacing f by f' yields the result

L[f''] = s^2 L[f] - sf(0) - f'(0).   (10)
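The first differentiation formula can also be tested numerically. In this sketch (our own helper, taking f(t) = sin t so that f'(t) = cos t and f(0) = 0), both sides of Eq. (8) are approximated by truncated trapezoidal integrals:

```python
import math

def lt(f, s, T=40.0, n=100_000):
    # Trapezoidal approximation of \int_0^T e^{-st} f(t) dt
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

s = 2.0
lhs = lt(math.cos, s)                      # L[f'](s) with f(t) = sin t
rhs = s * lt(math.sin, s) - math.sin(0.0)  # s L[f](s) - f(0)
print(lhs, rhs)   # both should be close to s/(s^2 + 1) = 0.4
```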
Shift theorems:
1. First shift theorem: If F(s) = L[f] exists for Re(s) > α ≥ 0, then

L[e^{ct} f] = F(s - c)   for Re(s - c) > α,   (11)
where c is a constant.
Proof: Let g(t) = e^{ct} f(t) for t ≥ 0. Then

G(s) = L[g] = \int_0^\infty e^{-st} e^{ct} f(t)\,dt
            = \int_0^\infty e^{-(s-c)t} f(t)\,dt
            = F(s - c).   (12)
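A quick numerical illustration of the first shift theorem, taking f(t) = sin t (so F(s) = 1/(s^2 + 1)) and c = 1; the helper `lt` is our own truncated trapezoidal approximation of the transform integral:

```python
import math

def lt(f, s, T=40.0, n=100_000):
    # Trapezoidal approximation of \int_0^T e^{-st} f(t) dt
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

c, s = 1.0, 3.0
shifted = lt(lambda t: math.exp(c * t) * math.sin(t), s)  # L[e^{ct} sin t](s)
print(shifted, 1.0 / ((s - c) ** 2 + 1.0))  # theorem predicts F(s - c) = 0.2
```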
2. Second shift theorem: If F (s) = L[f ] exists for Re(s) > α ≥ 0, and c is a positive
constant, then
L[H(t - c) f(t - c)] = e^{-cs} F(s)   for Re(s) > α,   (13)

where H is the Heaviside step function, defined by

H(t) = \begin{cases} 0, & t < 0, \\ 1, & t \ge 0. \end{cases}   (14)
Proof: Let g(t) = H(t-c) f(t-c), which implies that g(t) = 0 for t < c and g(t) = f(t-c) for t ≥ c. Then

G(s) = L[g] = \int_c^\infty e^{-st} f(t - c)\,dt
            = \int_0^\infty e^{-s(v+c)} f(v)\,dv   (change of variable v = t - c, so t = v + c)
            = e^{-sc} \int_0^\infty e^{-sv} f(v)\,dv
            = e^{-sc} F(s).   (15)
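The second shift theorem can be checked the same way. Here f(t) = t, so F(s) = 1/s^2, and with c = 1 the theorem predicts L[H(t-1)(t-1)] = e^{-s}/s^2 (again, the numerical helper is our own, not from the notes):

```python
import math

def lt(f, s, T=40.0, n=100_000):
    # Trapezoidal approximation of \int_0^T e^{-st} f(t) dt
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

def H(t):
    # Heaviside step function, Eq. (14)
    return 1.0 if t >= 0.0 else 0.0

c, s = 1.0, 2.0
lhs = lt(lambda t: H(t - c) * (t - c), s)  # L[H(t-c) f(t-c)] with f(t) = t
rhs = math.exp(-c * s) / s ** 2            # e^{-cs} F(s) with F(s) = 1/s^2
print(lhs, rhs)
```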
Solving linear second order DEs with LTs
Let’s review quickly how solutions to linear second order differential equations with constant
coefficients can be obtained from LTs.
Consider the problem
y′′ + 3y′ + 2y = et, y(0) = 1, y′(0) = 2. (16)
Of course, we could solve this problem using the “traditional method:”
1. Find linearly independent solutions u1 and u2 to the associated homogeneous DE and
then construct its general solution yh = C1u1 + C2u2,
2. Find the particular solution yp of the inhomogeneous DE,
3. Construct the general solution y = yh + yp and then solve for C1 and C2 to fit the initial conditions.
Taking LTs of both sides of the DE:
L[y′′ + 3y′ + 2y] = L[et] (17)
The RHS is simply \frac{1}{s-1}. By linearity of the LT, the LHS becomes

L[y''] + 3L[y'] + 2L[y] = [s^2 Y(s) - s y(0) - y'(0)] + 3[s Y(s) - y(0)] + 2Y(s)   (18)
                        = (s^2 + 3s + 2) Y(s) - s - 5.
Thus Eq. (17) becomes
(s+1)(s+2) Y(s) - (s+5) = \frac{1}{s-1},   (19)

which can be rearranged to give

Y(s) = \frac{s+5}{(s+1)(s+2)} + \frac{1}{(s-1)(s+1)(s+2)}.   (20)
The following partial fraction decompositions can be performed:

\frac{s+5}{(s+1)(s+2)} = \frac{4}{s+1} - \frac{3}{s+2}   (21)

and

\frac{1}{(s-1)(s+1)(s+2)} = \frac{1}{6} \cdot \frac{1}{s-1} - \frac{1}{2} \cdot \frac{1}{s+1} + \frac{1}{3} \cdot \frac{1}{s+2},   (22)

to give

Y(s) = \frac{7}{2} \cdot \frac{1}{s+1} - \frac{8}{3} \cdot \frac{1}{s+2} + \frac{1}{6} \cdot \frac{1}{s-1}.   (23)

Taking inverse LTs gives

y(t) = \frac{7}{2} e^{-t} - \frac{8}{3} e^{-2t} + \frac{1}{6} e^t.   (24)
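It is worth confirming that Eq. (24) really solves the IVP (16). The short check below hard-codes y, y' and y'' from Eq. (24) and verifies the initial conditions and the residual of the DE at a few sample points:

```python
import math

# Solution from Eq. (24): y(t) = (7/2) e^{-t} - (8/3) e^{-2t} + (1/6) e^t
def y(t):   return 3.5 * math.exp(-t) - (8.0/3.0) * math.exp(-2.0*t) + math.exp(t) / 6.0
def dy(t):  return -3.5 * math.exp(-t) + (16.0/3.0) * math.exp(-2.0*t) + math.exp(t) / 6.0
def d2y(t): return 3.5 * math.exp(-t) - (32.0/3.0) * math.exp(-2.0*t) + math.exp(t) / 6.0

# Initial conditions: should print values close to 1.0 and 2.0
print(y(0.0), dy(0.0))

# Residual of y'' + 3y' + 2y - e^t at sample points
for t in (0.0, 0.5, 1.0, 2.0):
    r = d2y(t) + 3.0 * dy(t) + 2.0 * y(t) - math.exp(t)
    assert abs(r) < 1e-9
```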
At this point, one may well wonder whether it was worth all of the effort to solve this problem
using Laplace transforms. Perhaps not, for this specific problem, but LTs do reveal some general
mathematical properties that are not evident using the more classical techniques, as we’ll see
below.
With the above comments in mind, we now consider the more general second order IVP as a
motivation for further studies of the LT:
y′′ + py′ + qy = u(t), y(0) = y0, y′(0) = v0. (25)
Taking LTs of both sides yields, after a little algebra,
Y(s)[s^2 + ps + q] - (s+p) y_0 - v_0 = U(s),   (26)

where U(s) is the LT of u(t). We may rearrange this equation to solve for Y(s):

Y(s) = \frac{U(s)}{s^2 + ps + q} + \frac{(s+p) y_0 + v_0}{s^2 + ps + q}.   (27)
Once p and q are prescribed, the second term on the RHS is quite straightforward to invert, involving exponentials, trigonometric functions, or both. In fact, the reader should recognize that inverting the second term will yield the solution of the associated homogeneous DE with the original initial conditions:
y′′ + py′ + qy = 0, y(0) = y0, y′(0) = v0. (28)
To see this, simply set u(t) = 0 in Eq. (25). Therefore, we can express the LT in Eq. (27) as
Y (s) = Yp(s) + Yh(s), (29)
where Yh = L[yh], yh being the solution of Eq. (28).
Lecture 22
Laplace transforms (cont’d)
We continue with our discussion from the previous lecture, namely, the more general second order IVP as a motivation for further studies of the LT:
y′′ + py′ + qy = u(t), y(0) = y0, y′(0) = v0. (30)
Taking LTs of both sides yields, after a little algebra,
Y(s)[s^2 + ps + q] - (s+p) y_0 - v_0 = U(s),   (31)

where U(s) is the LT of u(t). We may rearrange this equation to solve for Y(s):

Y(s) = \frac{U(s)}{s^2 + ps + q} + \frac{(s+p) y_0 + v_0}{s^2 + ps + q}.   (32)
The LT in Eq. (27) may be expressed as
Y (s) = Yp(s) + Yh(s), (33)
where Yh = L[yh], yh being the solution of Eq. (28).
Let us now focus on the LT involving the transform U(s) and write it as
Y_p(s) = G(s) U(s),   where   G(s) = \frac{1}{s^2 + ps + q}.   (34)
For reasons that will become clear below, G(s) is known as the transfer function associated
with the linear differential operator,
y′′ + py′ + qy , (35)
that comprises our original problem. If we now apply the inverse LT to (34), we obtain
yp(t) = L−1[G(s)U(s)]. (36)
If we knew the functional form of u(t), hence U(s), we might well be able to invert the above LT to obtain yp. But one may well wish to ask, “How is the RHS, i.e., the inverse transform of
G(s)U(s), related, if at all, to the inverse transforms L−1[G(s)] and L−1[U(s)]?” Is it, by any
chance, the product of the inverse transforms L^{-1}[G(s)] and L^{-1}[U(s)]? The answer, unfortunately, is “no.”
Example: Given

f(t) = 1   and   g(t) = t.   (37)

Then

F(s) = \frac{1}{s}   and   G(s) = \frac{1}{s^2}.   (38)

Let us now write G(s) as

G(s) = \frac{1}{s} \cdot \frac{1}{s}.   (39)

Then

L^{-1}[G(s)] = L^{-1}\left[ \frac{1}{s} \cdot \frac{1}{s} \right] = t.   (40)

But

L^{-1}\left[ \frac{1}{s} \right] = 1.   (41)

Therefore, we see that

t = L^{-1}\left[ \frac{1}{s^2} \right] \neq L^{-1}\left[ \frac{1}{s} \right] \cdot L^{-1}\left[ \frac{1}{s} \right] = 1.   (42)
In general, suppose that F(s), G(s) and H(s) are LTs of, respectively, f(t), g(t) and h(t). And suppose further that

H(s) = F(s) G(s).   (43)

The question is, “How are f and g related to h?” The answer is that h is the convolution of f and g, denoted as f * g and defined as follows:

h(t) = (f * g)(t) = \int_0^t f(t - \tau) g(\tau)\,d\tau.   (44)
A change of variables shows that

(f * g)(t) = (g * f)(t) = \int_0^t g(t - \tau) f(\tau)\,d\tau.   (45)
Example: Let f(t) = 1 and g(t) = t. Then

F(s) = \frac{1}{s},   G(s) = \frac{1}{s^2},   implying that   H(s) = F(s) G(s) = \frac{1}{s^3}.   (46)

From the formula

L[t^n] = \frac{n!}{s^{n+1}},   (47)

we have that

L[t^2] = \frac{2}{s^3},   (48)

so that

h(t) = L^{-1}\left[ \frac{1}{s^3} \right] = \frac{1}{2} L^{-1}\left[ \frac{2}{s^3} \right] = \frac{1}{2} t^2.   (49)

Let us now compute the convolution of f and g:

(f * g)(t) = \int_0^t 1 \cdot \tau\,d\tau   (50)
           = \frac{1}{2} t^2
           = h(t).
It shouldn’t matter whether we write H(s) = F (s)G(s) or H(s) = G(s)F (s). Therefore, we
should obtain the same result if we “convolve” g with f . Let’s see:
(g * f)(t) = \int_0^t (t - \tau) \cdot 1\,d\tau   (51)
           = \left[ t\tau - \frac{1}{2}\tau^2 \right]_{\tau=0}^{\tau=t}
           = \frac{1}{2} t^2
           = h(t).
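Both orderings of the convolution can be confirmed numerically; the sketch below (our own trapezoidal approximation of the convolution integral (44)) reproduces h(t) = t^2/2 from either order:

```python
def convolve(f, g, t, n=10_000):
    # Trapezoidal approximation of (f*g)(t) = \int_0^t f(t - tau) g(tau) d tau
    if t == 0.0:
        return 0.0
    h = t / n
    acc = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        acc += f(t - tau) * g(tau)
    return acc * h

one = lambda t: 1.0   # f(t) = 1
ramp = lambda t: t    # g(t) = t
t = 1.5
print(convolve(one, ramp, t), convolve(ramp, one, t), 0.5 * t ** 2)
# all three values should agree: h(t) = t^2/2
```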
We now state the formal result.
Convolution Theorem: Suppose that f(t) and g(t) have Laplace transforms F (s) and G(s),
respectively, for Re(s) > a. Define H(s) = F(s)G(s). Then the inverse Laplace transform is

h(t) = L^{-1}[H(s)] = (f * g)(t).   (52)
Proof: Note, by definition, that
F(s) G(s) = \int_0^\infty e^{-sx} f(x)\,dx \int_0^\infty e^{-sy} g(y)\,dy   (53)
          = \int_0^\infty \int_0^\infty e^{-s(x+y)} f(x) g(y)\,dx\,dy.
Now make the following change of variables
u = x+ y, v = x, (54)
so that
y = u− x = u− v. (55)
Then x ≥ 0 implies that v ≥ 0. And y ≥ 0 implies that u−v ≥ 0 or u ≥ v. We now implement
the above change of variables in the integration as follows:
\int_0^\infty \int_0^\infty e^{-s(x+y)} f(x) g(y)\,dx\,dy = \int_0^\infty \int_0^u e^{-su} f(v) g(u - v) \left| \frac{\partial(x,y)}{\partial(v,u)} \right| dv\,du.   (56)
The Jacobian of the transformation (x, y) → (u, v) is computed to be
\left| \frac{\partial(x,y)}{\partial(v,u)} \right| = \left| \det \begin{pmatrix} \partial x/\partial v & \partial x/\partial u \\ \partial y/\partial v & \partial y/\partial u \end{pmatrix} \right| = \left| \det \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \right| = 1.   (57)
We use this result and rewrite the above double integration as follows:
F(s) G(s) = \int_0^\infty e^{-su} \left[ \int_0^u f(v) g(u - v)\,dv \right] du   (58)
          = \int_0^\infty e^{-su} h(u)\,du,
where
h(t) = \int_0^t f(\tau) g(t - \tau)\,d\tau = (g * f)(t) = (f * g)(t).   (59)
The proof is complete.
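The Convolution Theorem itself can be illustrated numerically. With f(t) = e^{-t} and g(t) = e^{-2t}, so F(s) = 1/(s+1) and G(s) = 1/(s+2), the sketch below transforms the numerically computed convolution and compares against F(s)G(s) (helpers, truncation and step counts are our own choices):

```python
import math

def lt(f, s, T=20.0, n=3000):
    # Trapezoidal approximation of \int_0^T e^{-st} f(t) dt
    h = T / n
    acc = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * f(t)
    return acc * h

def conv(f, g, t, n=500):
    # Trapezoidal approximation of (f*g)(t) = \int_0^t f(t - tau) g(tau) d tau
    if t == 0.0:
        return 0.0
    h = t / n
    acc = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        acc += f(t - tau) * g(tau)
    return acc * h

f = lambda t: math.exp(-t)        # F(s) = 1/(s+1)
g = lambda t: math.exp(-2.0 * t)  # G(s) = 1/(s+2)
s = 1.0
lhs = lt(lambda t: conv(f, g, t), s)   # L[f * g](s)
rhs = 1.0 / ((s + 1.0) * (s + 2.0))    # F(s) G(s) = 1/6
print(lhs, rhs)
```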
Let us now return to the general second order problem in Eq. (25),
y′′ + py′ + qy = u(t), y(0) = y0, y′(0) = v0. (60)
Taking LTs of both sides yielded the relation
Y(s) = G(s) U(s) + \frac{(s+p) y_0 + v_0}{s^2 + ps + q}   (61)
     = Y_p(s) + Y_h(s),

where

G(s) = \frac{1}{s^2 + ps + q}.   (62)
It is straightforward to find the inverse LTs of G(s) and Yh(s). These functions, g(t) and yh(t),
respectively, will involve exponentials and/or trigonometric functions. Taking inverse LTs of
both sides of Eq. (61) gives
y(t) = yp(t) + yh(t), (63)
where
y_p(t) = (g * u)(t) = \int_0^t g(t - \tau) u(\tau)\,d\tau,   (64)

with

g(t) = L^{-1}\left[ \frac{1}{s^2 + ps + q} \right],   (65)

and

y_h(t) = L^{-1}\left[ \frac{(s+p) y_0 + v_0}{s^2 + ps + q} \right].   (66)
Note that because yh satisfies the homogeneous DE with the initial value conditions
y(0) = y0 , y′(0) = v0 , (67)
it follows that the particular solution yp(t) satisfies the initial conditions
yp(0) = 0, y′p(0) = 0. (68)
Example 1: The initial value problem
y′′ + ω2y = u(t), y(0) = y0, y′(0) = v0. (69)
(This is a more general version of Example 3.2, p. 120, of the AMATH 351 Course Notes by
J. Wainwright.) For pedagogical purposes, we’ll start at the beginning and take LTs of both
sides:
s2Y (s)− sy0 − v0 + ω2Y (s) = U(s), (70)
which can be rearranged to
(s2 + ω2)Y (s) = U(s) + y0s+ v0 . (71)
Solving algebraically for Y(s),

Y(s) = \frac{U(s)}{s^2 + \omega^2} + y_0 \frac{s}{s^2 + \omega^2} + v_0 \frac{1}{s^2 + \omega^2},   (72)

which agrees with the general form in Eq. (61). Taking inverse LTs gives

y(t) = \frac{1}{\omega} \int_0^t \sin\omega(t - \tau)\, u(\tau)\,d\tau + y_0 \cos\omega t + \frac{v_0}{\omega} \sin\omega t.   (73)
Example 2: Example 1 above with u(t) = t. Then (Exercise)

y_p(t) = \frac{1}{\omega} \int_0^t \tau \sin\omega(t - \tau)\,d\tau = \frac{t}{\omega^2} - \frac{1}{\omega^3} \sin\omega t.   (74)

Note that y_p(0) = 0 and y_p'(0) = 0. The solution of the IVP is then

y(t) = \frac{t}{\omega^2} + y_0 \cos\omega t + \left( \frac{v_0}{\omega} - \frac{1}{\omega^3} \right) \sin\omega t.   (75)
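The “Exercise” in Eq. (74) can be checked by brute force: the sketch below evaluates the convolution integral numerically (with ω = 2, an arbitrary choice of ours) and compares it with the closed form t/ω^2 - sin(ωt)/ω^3:

```python
import math

w = 2.0  # omega, an arbitrary positive value

def yp(t):
    # Closed form from Eq. (74): t/w^2 - sin(w t)/w^3
    return t / w**2 - math.sin(w * t) / w**3

def conv_integral(t, n=10_000):
    # Trapezoidal approximation of (1/w) \int_0^t tau sin(w (t - tau)) d tau
    if t == 0.0:
        return 0.0
    h = t / n
    acc = 0.0   # the integrand vanishes at both endpoints tau = 0 and tau = t
    for k in range(1, n):
        tau = k * h
        acc += tau * math.sin(w * (t - tau))
    return acc * h / w

for t in (0.5, 1.0, 3.0):
    assert abs(conv_integral(t) - yp(t)) < 1e-6
print("convolution matches the closed form; yp(0) =", yp(0.0))
```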
At this point, we mention that the above solution could have been obtained from other
standard methods, e.g., variation of parameters, where one assumes a particular solution of the
form,
yp(t) = u(t) cosωt+ v(t) sinωt , (76)
where u(t) and v(t) are to be determined (subject to another condition). But the Laplace
transform method reveals some underlying mathematical structure, e.g., the transfer function,
along with how the particular solution can be expressed as a convolution.
Lecture 23
Laplace transforms (cont’d) – the “Dirac delta function”
Suppose you have a chemical (or radioactive, for that matter) species “X” in a beaker that
decays according to the rate law

\frac{dx}{dt} = -kx,   (77)
where x(t) is the concentration at time t. Suppose that at time t = 0, there is x0 amount of X
present. Of course, if the beaker is left alone, then the amount of X at time t ≥ 0 will be given
by
x(t) = x_0 e^{-kt}.   (78)
Now suppose that at time a > 0, you quickly (i.e., instantaneously) add an amount A > 0 of
X to the beaker. Then what is x(t), the amount of X at time t ≥ 0?
Well, for 0 ≤ t < a, i.e., for all times before you add A units of X to the beaker, x(t) is given by Eq. (78) above. Then at t = a, there would have been x_0 e^{-ka} in the beaker, but you added A, to give x(a) = x_0 e^{-ka} + A. Then starting at t = a, the system will evolve according to the rate law. We can consider x(a) as the new initial condition and measure time from t = a. The amount of X in the beaker will then be

x(t) = (x_0 e^{-ka} + A) e^{-k(t-a)}
     = x_0 e^{-kt} + A e^{-k(t-a)}   for t ≥ a.   (79)
Let’s summarize our result compactly: For the above experiment, the amount of X in the
beaker will be given by
x(t) = \begin{cases} x_0 e^{-kt}, & 0 ≤ t < a, \\ x_0 e^{-kt} + A e^{-k(t-a)}, & t ≥ a. \end{cases}   (80)
A qualitative sketch of the solution is given below. Clearly, the solution x(t) has a discontinuity
at t = a, the time of instantaneous addition of the amount A. Otherwise, it is differentiable at
all other points.
[Figure: qualitative sketch of x(t) vs. t, showing the jump of height A at t = a from x(a) to x(a) + A.]
Note that we can write the solution in Eq. (80) even more compactly as follows,
x(t) = x_0 e^{-kt} + A e^{-k(t-a)} H(t - a),   t ≥ 0,   (81)
where H(t) is the Heaviside function reviewed earlier. We shall return to this solution a little
later.
We now ask whether the above operation – the instantaneous addition of an amount A of X to the beaker – can be represented mathematically, perhaps with a function f(t) so that the evolution of x(t) can be expressed as

\frac{dx}{dt} = -kx + f(t),   (82)

where f(t) models the instantaneous addition of an amount A of X at time t = a. The answer is “Yes,” and f(t) will be the so-called “Dirac delta function” f(t) = A\delta(t - a).
But in order to appreciate this result, let us now consider the case of a less brutal addition of X to the beaker. Suppose that we add an amount of A units of X but over a time interval of length ∆. We’ll also add X to the beaker at a constant rate of A/∆ units per unit time. This means that our evolution equation for x(t) will take the form

\frac{dx}{dt} = \begin{cases} -kx, & 0 ≤ t < a, \\ -kx + A/∆, & a ≤ t ≤ a + ∆, \\ -kx, & t > a + ∆. \end{cases}   (83)
We can express this in compact form as

\frac{dx}{dt} = -kx + f_∆(t),   (84)

where

f_∆(t) = \frac{A}{∆} \left[ H(t - a) - H(t - (a + ∆)) \right].   (85)
A graph of the function f∆(t) is sketched below. Note that the area under the curve and above
the x-axis is A, as it should be – when you integrate a rate function over a time interval [a, b],
you obtain the amount added over that time interval.
[Figure: graph of f_∆(t) vs. t – a box of height A/∆ over the interval [a, a + ∆].]
We can solve the DE in (84) in two ways: (1) as a linear first-order inhomogeneous DE,
(2) using Laplace Transforms. Since this is a section on LT’s, let’s use Method (2). (I’ll leave
Method (1) as an exercise.) Taking LTs of both sides of (84) yields
sX(s)− x0 = −kX(s) + F∆(s) . (86)
We then solve for X(s):

X(s) = \frac{x_0}{s + k} + \frac{1}{s + k} F_∆(s).   (87)
Noting that the inverse LT of \frac{1}{s+k} is e^{-kt}, we have, after taking inverse LTs:

x(t) = x_0 e^{-kt} + f_∆(t) * e^{-kt}.   (88)
Note that the first term x_0 e^{-kt} is the solution to the homogeneous DE associated with (84).
Now compute the convolution of f_∆ with e^{-kt} as follows:

f_∆(t) * e^{-kt} = \int_0^t f_∆(\tau) e^{-k(t-\tau)}\,d\tau   (89)
                 = \frac{A}{∆} \int_0^t e^{-k(t-\tau)} H(\tau - a)\,d\tau - \frac{A}{∆} \int_0^t e^{-k(t-\tau)} H(\tau - (a + ∆))\,d\tau
                 = I_1(t) - I_2(t).
Because of the Heaviside function H(τ −a), the integrand in the first integral I1(t) – hence the
integral itself – will be zero for 0 ≤ t < a. For t ≥ a, we can compute I1(t) as follows:
I_1(t) = \frac{A}{∆} \int_a^t e^{-k(t-\tau)}\,d\tau = \frac{A}{∆} e^{-kt} \int_a^t e^{k\tau}\,d\tau   (90)
       = \frac{A}{k∆} \left[ 1 - e^{-k(t-a)} \right],   t ≥ a.
Likewise, we can determine the second integral to be
I_2(t) = \frac{A}{k∆} \left[ 1 - e^{-k(t-(a+∆))} \right],   t ≥ a + ∆.   (91)
The final result for x(t) is
x(t) = x_0 e^{-kt} + I_1(t) H(t - a) - I_2(t) H(t - (a + ∆)).   (92)
The qualitative behaviour of the graph of x(t) is sketched below. Perhaps one of the most important features of this solution is that it is continuous for all time. The fact that we started
to add substance X at time t = a does not produce a sudden jump in the amount x(t) in the
beaker. This is because the rate of addition is finite. The amount of X being added is, in fact,
an integration over this finite rate. That being said, the derivative x′(t) is not continuous at
t = a and t = a +∆, which is a consequence of the instantaneous changes in the rate function
f∆(t) at t = a and t = a +∆.
Just one short note: It is possible that x(t) is decreasing over the interval [a, a + ∆], during
which time the amount A is added. But if ∆ is sufficiently small, i.e., the rate A/∆ is large
enough, x(t) will be increasing over this interval.
[Figure: qualitative sketch of x(t) vs. t – the solution is continuous, rising from x(a) toward x(a) + A over the interval [a, a + ∆].]
But this point is rather secondary. What is of prime importance is the difference in concentrations between time t = a, the time at which we began to add X to the beaker, and time
t = a + ∆, the time at which we stopped adding X . Remember that regardless of ∆, we are
always adding a total amount of A to the beaker. This difference in concentrations is given by
x(a + ∆) - x(a) = x_0 e^{-k(a+∆)} + \frac{A}{k∆} \left[ 1 - e^{-k∆} \right] - x_0 e^{-ka}
                = -x_0 e^{-ka} \left[ 1 - e^{-k∆} \right] + \frac{A}{k∆} \left[ 1 - e^{-k∆} \right].   (93)
In particular, we are interested in what happens to this difference as ∆ → 0, i.e., the time
interval over which we add A units of X goes to zero. The first term on the RHS of (93) clearly
vanishes when ∆ = 0. Determining the limit of the second term in the limit ∆ → 0 is a little
more delicate. We first expand the exponential involving ∆ to give

e^{-k∆} = 1 - k∆ + O(∆^2)   as ∆ → 0,   (94)

so that

1 - e^{-k∆} = k∆ + O(∆^2)   as ∆ → 0.   (95)

This implies that

\frac{A}{k∆} \left[ 1 - e^{-k∆} \right] = A + O(∆)   as ∆ → 0.   (96)

Substitution into (93) yields the result,

x(a + ∆) - x(a) → A   as ∆ → 0.   (97)
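The limit (97) is easy to watch numerically. Using Eq. (93) directly (with illustrative values of k, a, A and x0 chosen by us), the jump tends to A as ∆ shrinks:

```python
import math

# Illustrative parameter values (our choice): decay rate, addition time, amount, initial amount
k, a, A, x0 = 0.7, 1.0, 2.0, 5.0

def jump(delta):
    # x(a + delta) - x(a) from Eq. (93)
    e = 1.0 - math.exp(-k * delta)
    return -x0 * math.exp(-k * a) * e + (A / (k * delta)) * e

for delta in (1.0, 0.1, 0.01, 0.001):
    print(delta, jump(delta))
# the printed values approach A = 2 as delta -> 0
```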
In other words, in the limit ∆ → 0, the graph of x(t) will exhibit a discontinuity at t = a. The magnitude of this “jump” is A, precisely what was found with the earlier method.
We now return to the inhomogeneous DE in (84) and examine the behaviour of the inhomogeneous term f_∆(t) as ∆ → 0. Recall that this is the “driving” term, the function that
models the addition of a total of A units of X to the beaker over the time interval [a, a +∆].
The width of the box that makes up the graph of f_∆(t) is ∆. The height of the box is A/∆. In this way, the area of the box – the total amount of X delivered – is A. As ∆ decreases, the box
gets thinner and higher. In the limit ∆ → 0, we have produced a function f0(t) that is zero
everywhere except at t = a, where it is undefined. This is the idea behind the “Dirac delta
function,” which we explore in more detail below.
The Dirac delta function
Let us define the following function I_ε(t) for an ε > 0:

I_ε(t) = \begin{cases} 1/ε, & 0 ≤ t ≤ ε, \\ 0, & t > ε. \end{cases}   (98)
The graph of I_ε(t) is sketched below.

[Figure: graph of I_ε(t) vs. t – a box of height 1/ε on the interval [0, ε].]
Clearly

\int_{-\infty}^{\infty} I_ε(t)\,dt = 1,   for all ε > 0.   (99)
Now let f(t) be a continuous function on [0, ∞) and consider the integrals

\int_{-\infty}^{\infty} f(t) I_ε(t)\,dt = \frac{1}{ε} \int_0^ε f(t)\,dt,   (100)
in particular for small ε and the limit ε → 0. For any ε > 0, because of the continuity of f(t), there exists, by the Mean Value Theorem for Integrals, a c_ε ∈ [0, ε] such that

\int_0^ε f(t)\,dt = f(c_ε) \cdot ε.   (101)

Therefore,

\int_0^ε f(t) I_ε(t)\,dt = f(c_ε).   (102)

As ε → 0, c_ε → 0 since the interval [0, ε] → {0}. Therefore

\lim_{ε \to 0} \int_0^ε f(t) I_ε(t)\,dt = f(0).   (103)
We may, of course, translate the function I_ε(t) to produce the result

\lim_{ε \to 0} \int_{-\infty}^{\infty} f(t) I_ε(t - a)\,dt = f(a).   (104)
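The limits (103) and (104) can be observed numerically. Since the integral of f(t) I_ε(t - a) is just the average of f over [a, a + ε], the sketch below computes that average by the trapezoidal rule for shrinking ε and watches it approach f(a) (here f = cos and a = 1, our own illustrative choices):

```python
import math

def smeared(f, a, eps, n=10_000):
    # (1/eps) \int_a^{a+eps} f(t) dt by the trapezoidal rule;
    # this equals \int f(t) I_eps(t - a) dt for the box function I_eps
    h = eps / n
    acc = 0.5 * (f(a) + f(a + eps))
    for k in range(1, n):
        acc += f(a + k * h)
    return acc * h / eps

a = 1.0   # illustrative translation point
for eps in (0.5, 0.1, 0.01):
    print(eps, smeared(math.cos, a, eps))
# the printed values approach f(a) = cos(1) as eps -> 0
```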
This is essentially a definition of the “Dirac delta function,” which is not a function but rather a “generalized function” that is defined in terms of integrals over continuous functions. In proper mathematical parlance, the Dirac delta function is a distribution. The Dirac delta function δ(t) is defined by the integral relation

\int_{-\infty}^{\infty} f(t) \delta(t)\,dt = f(0).   (105)

Moreover,

\int_c^d f(t) \delta(t)\,dt = 0,   if c > 0.   (106)
As well, we have the translation result

\int_{-\infty}^{\infty} f(t) \delta(t - a)\,dt = f(a).   (107)
Finally, we compute the Laplace transform of the Dirac delta function. In this case f(t) = e^{-st}, so that

L[\delta(t - a)] = \int_0^\infty e^{-st} \delta(t - a)\,dt = e^{-as},   a ≥ 0.   (108)
Let us now return to the substance X problem examined earlier where an amount A of substance X is added to a beaker over a time interval of length ∆. Comparing the function f_∆(t) used for that problem and the function I_ε(t) defined above, we see that ∆ = ε and

f_∆(t) = A I_∆(t - a).   (109)
From our discussion above on the Dirac delta function, in the limit ∆ → 0 the function f∆(t)
becomes
f(t) = Aδ(t− a). (110)
Therefore, the differential equation for x(t) modelling the instantaneous addition of A to the beaker is given by

\frac{dx}{dt} = -kx + A\delta(t - a).   (111)
We now determine x(t) using Eq. (88), which was obtained by using Laplace transforms, replacing f_∆(t) with f(t) = A\delta(t - a). Formally,

x(t) = x_0 e^{-kt} + f(t) * e^{-kt}
     = x_0 e^{-kt} + A\delta(t - a) * e^{-kt}.   (112)
We now compute the convolution involving the Dirac delta function. For convenience, we shall reverse the order of the functions, i.e.,

A\delta(t - a) * e^{-kt} = e^{-kt} * A\delta(t - a)   (113)
                        = A \int_0^t e^{-k(t-\tau)} \delta(\tau - a)\,d\tau.
The only contribution to this integral comes at the point τ = a. As such, the integral is zero
for 0 ≤ t < a. For t ≥ a, this integral is nonzero and becomes

A \int_0^t e^{-k(t-\tau)} \delta(\tau - a)\,d\tau = A e^{-k(t-a)},   t ≥ a.   (114)
Therefore, the solution x(t) becomes
x(t) = x_0 e^{-kt} + A e^{-k(t-a)} H(t - a),   (115)
in agreement with the result obtained in Eq. (81). We have therefore shown that the term
Aδ(t− a) correctly models the instantaneous addition of A units of substance X at time t = a.
Alternate ways of defining the Dirac delta function
There are many other ways to construct the Dirac delta function. For example, one could make the function I_ε(t) symmetric about the point 0 by defining it as follows:

I_ε(t) = \begin{cases} 1/ε, & -ε/2 ≤ t ≤ ε/2, \\ 0, & |t| > ε/2. \end{cases}   (116)
Smoother functions may also be considered. For example, consider the following function, called a “Gaussian,”

G_\sigma(t) = \frac{1}{\sigma\sqrt{2\pi}} e^{-t^2/(2\sigma^2)}.   (117)
The graphs of some Gaussian functions for various σ values are sketched below.

[Figure: Gaussian functions G_σ(t) for σ = 1, 0.5, 0.25 – the curves become narrower and taller as σ decreases.]
The Gaussian function defined above is normalized, i.e.,

\int_{-\infty}^{\infty} G_\sigma(t)\,dt = 1.   (118)
You may have encountered this function in statistics courses – it is the so-called “normal” distribution function for random variables (in this case, with mean zero). The quantity σ > 0
is known as the standard deviation and characterizes the width or spread of the curve. As
σ → 0, the width of the curve decreases and the height increases, so as to preserve the area
under the curve. In the limit σ → 0, the Gaussian Gσ(t) approaches the Dirac delta function
in the context of integration – for any continuous function f(t),

\lim_{\sigma \to 0} \int_{-\infty}^{\infty} f(t) G_\sigma(t)\,dt = f(0).   (119)
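This Gaussian limit can also be watched numerically. The sketch below integrates f(t) G_σ(t) over [-10σ, 10σ] (beyond which the Gaussian is negligible) for shrinking σ, with f = cos as an illustrative choice; for f(t) = cos t the exact value is e^{-σ²/2}, which tends to f(0) = 1:

```python
import math

def gauss(t, sigma):
    # Normalized Gaussian G_sigma(t) from Eq. (117)
    return math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def smeared(f, sigma, n=20_000, width=10.0):
    # Trapezoidal approximation of \int f(t) G_sigma(t) dt over [-width*sigma, width*sigma]
    lo = -width * sigma
    h = 2.0 * width * sigma / n
    acc = 0.5 * (f(lo) * gauss(lo, sigma) + f(-lo) * gauss(-lo, sigma))
    for k in range(1, n):
        t = lo + k * h
        acc += f(t) * gauss(t, sigma)
    return acc * h

for sigma in (1.0, 0.5, 0.1):
    print(sigma, smeared(math.cos, sigma))
# the printed values approach f(0) = 1 as sigma -> 0
```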