Vol. 1, No. 1 Spring 2019
UT Austin Undergraduate Math Journal
A Student Publication of UT Austin
Table of Contents
1 From the Editors
2 Clifford Algebras
   Orthogonal Groups
   Hyperplanes and the Cartan-Dieudonné Theorem
   The Clifford Algebra
3 An Interview with Prof. Freed
4 Building and Solving Differential Equations Using Electronic Circuits
   Introduction
   Building Blocks
   Building Differential Equations
   Operational Amplifiers
5 A Word from the Students
6 Something to finish on...
SPECIAL FEATURE
1
From the Editors
Thank you so much for taking the time to check out the UT Austin Undergraduate
Math Journal! As you are probably aware, this is the first issue of the journal. There
have been many learning experiences along the way – ironing out LaTeX issues, editing
articles, conducting and transcribing longform interviews, etc. There are of course many
things we can (and will) improve on in future volumes. As such, we welcome any and all
constructive feedback you might have. Whether it’s comments on content or aesthetics,
please send your thoughts to [email protected]. We hope you enjoy reading
this journal as much as we enjoyed putting it together. Ciao!
STUDENT ARTICLE
2
Clifford Algebras
Jeffrey Jiang, ’19
Orthogonal Groups
An important group in mathematics is the orthogonal group On, which consists of
linear transformations T : Rn → Rn such that 〈Tv, Tw〉 = 〈v, w〉 for all v, w ∈ Rn (such
transformations are said to preserve the inner product 〈·, ·〉). This inner product induces
a norm | · | on Rn given by |v| = 〈v, v〉^(1/2), which in turn introduces a notion of length in
Rn. This also gives us a notion of the angle θ between two vectors v, w ∈ Rn via

θ = arccos( |〈v, w〉| / (|v| |w|) ).
Since On preserves the inner product, it also preserves these notions of angle and length.
The induced norm makes Rn into a metric space with distance function d : Rn × Rn → R given by d(v, w) = |v − w|. It follows that On is the group of linear isometries of the
metric space (Rn, d).
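As a quick illustration (our own addition, not part of the original article), one can check numerically that an orthogonal matrix preserves inner products and distances; the matrix and vectors below are arbitrary choices.

```python
import numpy as np

# Rotation of R^2 by an angle theta: an element of SO_2, hence of O_2.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 2.0])
w = np.array([-3.0, 0.5])

# A preserves the inner product <., .> ...
assert np.isclose(np.dot(A @ v, A @ w), np.dot(v, w))
# ... and therefore the induced distance d(v, w) = |v - w|.
assert np.isclose(np.linalg.norm(A @ v - A @ w), np.linalg.norm(v - w))
print("A acts as a linear isometry of (R^2, d)")
```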
The orthogonal group plays a large role in physics, since many of the linear transfor-
mations we want to observe in the natural world preserve our perceptions of angle and
length. For intuition, try thinking about what kind of shape these groups can have. In
one dimension, we don’t have too much to work with since any linear map preserving the
absolute value must be multiplication by ±1. Therefore, we find that O1 = {± id}.

Things are a little more interesting in two dimensions. Note first of all that any A ∈ O2
preserving length and angle must necessarily preserve the unit circle S1 ⊂ R2. Any such
transformation is then (almost) uniquely determined by the angle by which it rotates
a single vector, as well as whether it flips the orientation of R2. Heuristically, what we
mean by orientation is a choice of clockwise or counterclockwise. A rotation by some
angle θ preserves orientation, but a reflection, say across the y-axis, flips the orientation.
From our description, we now see that O2 should look like two disjoint circles: one circle
for all the rotations, and another separate circle of rotations following a reflection. We
call the orientation preserving component SO2, which you can think of as the group of
rotations. The components of O2 form a group, which we denote π0(O2). This follows
from the fact that the composition of two orientation reversing transformations preserves
orientation, which gives a group isomorphism π0(O2) ∼= Z/2Z.
Going back a bit, why did we say that the transformation is almost uniquely determined
by the angle? The reason is that a rotation by angle θ and a rotation by angle 2π − θ move any given vector through the same unsigned angle. This means that SO2 might not be parameterized by a circle as we might have thought!
In fact, though, SO2 is isomorphic to a circle (why?) – a fact that does not generalize
to higher dimensions. Indeed, the same observation as above in the three-dimensional
case gives us that SO3 is isomorphic not to S2 but rather to RP3! It’s a good mental
exercise to figure out what happens here, and why the two- and three-dimensional cases
are different. You might want to think of RP3 as the unit sphere S3 ⊂ R4 with antipodal
points identified via v ∼ −v. Can you see why this is the right picture? If not, don’t
worry: after some work we will have a more rigorous explanation.
Summarizing our results from tinkering around with orthogonal groups in low dimen-
sions:
1. The composition of an even number of orientation reversing maps is orientation
preserving, i.e. π0(On) ∼= Z/2Z.
2. Using spheres to describe orthogonal transformations has redundancies, which can
be attributed to antipodal points. Somehow, v and −v encode the same data for
an orthogonal transformation.
Hyperplanes and the Cartan-Dieudonné Theorem
Definition. Let V be a real vector space of dimension n. A hyperplane in V is an
(n− 1)-dimensional subspace P ⊂ V .
If V has an inner product then any hyperplane P is determined by the line given by
its orthogonal complement P⊥. In addition, since we have a notion of length, a line is
determined by the unique vector v ∈ P⊥ with norm 1 (called the unit normal). If V
doesn't have an inner product then there is in general no way to make these distinctions.
But wait, something is wrong with what we just said! There isn't a unique vector with
norm 1 in P⊥: −v is just as good. This is one of the key points we observed about the
orthogonal group, which suggests that something deeper is going on here. We can capture
this deeper relation by taking a hyperplane P ⊂ Rn and defining a map RP : Rn → Rn
that reflects Rn about P (e.g., reflection about a line in R2 or reflection about a plane in
R3). If we let v be one of the two unit normal vectors then RP is given by the formula
RP (w) = w − 2〈w, v〉v.
How do we interpret the formula? A hyperplane reflection should not change the
components of a vector that lie in P , instead flipping only the component orthogonal to
P . This is accomplished exactly by subtracting 2〈w, v〉v from w.
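To make the formula concrete, here is a small numerical sketch (our own, using numpy; the vectors are arbitrary) of the hyperplane reflection RP (w) = w − 2〈w, v〉v and its basic properties.

```python
import numpy as np

def hyperplane_reflection(w, v):
    """Reflect w across the hyperplane orthogonal to the unit vector v:
    R_P(w) = w - 2 <w, v> v."""
    return w - 2 * np.dot(w, v) * v

v = np.array([0.0, 1.0, 0.0])        # unit normal; P is the xz-plane
w = np.array([2.0, 3.0, -1.0])

Rw = hyperplane_reflection(w, v)     # only the component along v flips: (2, -3, -1)
print(Rw)

# Reflecting twice gives back w, and the inner product is preserved.
u = np.array([1.0, -2.0, 4.0])
assert np.allclose(hyperplane_reflection(Rw, v), w)
assert np.isclose(np.dot(hyperplane_reflection(u, v), Rw), np.dot(u, w))
```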
We’re now ready for a key piece of the puzzle.
Theorem (Cartan-Dieudonné). Any orthogonal transformation A ∈ On can be written as a composition of ≤ n hyperplane reflections.
Proof. We prove a slightly more general statement. Let V be a finite dimensional vector
space equipped with an inner product 〈·, ·〉, and let O(V ) denote the group of linear maps
V → V that preserve the inner product. We claim that any element of O(V ) can be
written as a composition of ≤ dimV hyperplane reflections. The proof is by induction
on dimV .
The 1-dimensional case is easy, since the only hyperplane in a 1-dimensional vector
space V is the zero subspace {0}, so the only elements of O(V ) are exactly idV and − idV (we
say that idV is the composition of 0 hyperplane reflections).
Now, let V be an n-dimensional vector space with inner product 〈·, ·〉. Fix A ∈ O(V )
and a nonzero vector v ∈ V . We want to find a hyperplane reflection R : V → V such
that RAv = v. To do this, let R be the hyperplane reflection about the hyperplane
bisecting v and Av. (If Av = v already, no reflection is needed at this step.) Explicitly, R is given by the formula

Rw = w − ( 2〈Av − v, w〉 / 〈Av − v, Av − v〉 ) (Av − v).

Then RA is an orthogonal transformation that fixes v. Indeed,

Rv = v − ( 2〈Av − v, v〉 / 〈Av − v, Av − v〉 ) (Av − v)
   = v − ( (2〈Av, v〉 − 2〈v, v〉) / (〈Av, Av〉 − 2〈Av, v〉 + 〈v, v〉) ) (Av − v)
   = v − ( (2〈Av, v〉 − 2〈v, v〉) / (2〈v, v〉 − 2〈Av, v〉) ) (Av − v)
   = v − (−1)(Av − v)
   = Av,

where the third equality uses 〈Av, Av〉 = 〈v, v〉 since A is orthogonal. Since reflecting twice is the identity, applying R to both sides gives RAv = R(Rv) = v.
Since RA ∈ O(V ) and RA fixes v, it maps the orthogonal complement v⊥ to itself. This is a hyperplane
in V with inner product obtained by restricting the inner product on V . The restriction
of RA to v⊥ is an orthogonal transformation of v⊥ and so is the composition of ≤ (n− 1)
hyperplane reflections in O(v⊥) by the inductive hypothesis. Since RA fixes v, we can
extend each of these hyperplane reflections to a hyperplane reflection on all of V by
first extending each hyperplane to a hyperplane in V (just take the span with v) and
then taking the corresponding reflection in V . Thus, RA is a composition of ≤ n − 1
hyperplane reflections in V . Since R^2 = idV , composing with R gives that R^2A = A is a composition of ≤ n hyperplane reflections. This completes the induction. □
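The proof is constructive, and it can be turned into a short algorithm. The sketch below (our own code, not from the article) fixes the standard basis vectors one at a time using the bisecting-hyperplane reflection from the proof, and checks the result on a random orthogonal matrix.

```python
import numpy as np

def reflection_matrix(u):
    """Matrix of the hyperplane reflection with (nonzero) normal vector u."""
    u = u / np.linalg.norm(u)
    return np.eye(len(u)) - 2.0 * np.outer(u, u)

def cartan_dieudonne(A, tol=1e-12):
    """Return a list of at most n hyperplane reflections whose product is A."""
    n = A.shape[0]
    B = A.copy()
    reflections = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        u = B @ e - e                  # normal of the hyperplane bisecting e_i and B e_i
        if np.linalg.norm(u) > tol:    # if B already fixes e_i, skip this step
            R = reflection_matrix(u)
            reflections.append(R)
            B = R @ B                  # now B fixes e_1, ..., e_i and stays orthogonal
    # At the end R_k ... R_1 A = I, so A = R_1 R_2 ... R_k (each R_j is its own inverse).
    return reflections

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal 4x4 matrix
refs = cartan_dieudonne(Q)
product = np.eye(4)
for R in refs:
    product = product @ R
assert len(refs) <= 4 and np.allclose(product, Q)
```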
The stage is now set for the construction of the Clifford algebra.
The Clifford Algebra
Definition. Equip Rn with the standard inner product 〈·, ·〉. The Clifford algebra for
Rn is a unital associative R-algebra Cliff(n) generated by Rn, subject to the relations
1. v^2 = −1 if |v| = 1,
2. vw = −wv if 〈v, w〉 = 0.
By unital, we mean that we have thrown in a new element e that acts as the multi-
plicative unit. Therefore, we have that for any λ ∈ R and v ∈ Cliff(n), we must have
λe · v = λv, so by a slight abuse of notation, we follow the standard convention in which
the multiplicative unit is denoted 1. When we say that Cliff(n) is generated by Rn, we
mean that every element of Cliff(n) can be written as a finite formal sum of products
of vectors in Rn (which then reduces by the relations specified above). Therefore, the
standard basis {ei} of Rn is a generating set for Cliff(n), yielding the basis
{ e_{i_1} · · · e_{i_k} : 0 ≤ k ≤ n, 1 ≤ i_1 < · · · < i_k ≤ n }.
Thus, a basis for Cliff(n) is just the set of all products of basis vectors with increasing
indices, along with 1. For example, a basis for Cliff(3) as a vector space is given by
{1, e1, e2, e3, e1e2, e1e3, e2e3, e1e2e3}.
This characterization allows us to see that the dimension of Cliff(n) as a real vector space
is 2^n. We also note that Cliff(n) contains a subspace
Span(e1, . . . , en) ∼= Rn.
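One way to get a feel for this is to implement the multiplication directly. The sketch below (our own, with hypothetical names) represents an element of Cliff(n) as a dictionary sending each basis monomial e_{i_1} · · · e_{i_k} (stored as its increasing tuple of indices, with () standing for the unit 1) to its real coefficient, and reduces products using the two defining relations.

```python
def cliff_mul(a, b):
    """Multiply two Cliff(n) elements given as {increasing index tuple: coefficient}.
    Reduction uses e_i e_i = -1 and e_i e_j = -e_j e_i for i != j."""
    out = {}
    for mono_a, ca in a.items():
        for mono_b, cb in b.items():
            sign, prod = 1, list(mono_a)
            for j in mono_b:                      # multiply on the right by e_j
                k = len(prod)
                while k > 0 and prod[k - 1] > j:  # move e_j left past larger indices
                    k -= 1
                sign *= (-1) ** (len(prod) - k)   # each swap costs a sign
                if k > 0 and prod[k - 1] == j:    # e_j meets e_j: use e_j^2 = -1
                    sign *= -1
                    prod.pop(k - 1)
                else:
                    prod.insert(k, j)
            key = tuple(prod)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {key: c for key, c in out.items() if c != 0}

e1, e2 = {(1,): 1}, {(2,): 1}
print(cliff_mul(e1, e1))   # {(): -1}         e_1^2 = -1
print(cliff_mul(e1, e2))   # {(1, 2): 1}
print(cliff_mul(e2, e1))   # {(1, 2): -1}     e_2 e_1 = -e_1 e_2
```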
What is the motivation for this definition? A unit vector v ∈ Rn ⊂ Cliff(n) should
represent a hyperplane reflection about the plane perpendicular to v. In particular, since
hyperplane reflections square to the identity, we want the same to be true here (but with
an added sign).¹ Another item of motivation is that hyperplane reflections about planes determined by orthogonal vectors commute. Try messing around with such reflections in the two- and three-dimensional settings: pay attention to signs!

¹Our choice of sign is simply convention: the choice v^2 = 1 works just as well. One benefit of our choice is that some formulas will look cleaner.
We now derive a helpful formula.
Lemma. Let v, w ∈ Rn ⊂ Cliff(n). Then, vw + wv = −2〈v, w〉.
Proof. Let v = ∑_i v^i e_i and w = ∑_j w^j e_j. We compute

vw + wv = ∑_i ∑_j (v^i w^j e_i e_j + v^i w^j e_j e_i) = ∑_i ∑_j v^i w^j (e_i e_j + e_j e_i).

Since e_i^2 = −1, while e_i e_j + e_j e_i = 0 whenever i ≠ j (because 〈e_i, e_j〉 = 0), this sum collapses to

∑_i v^i w^i (e_i^2 + e_i^2) = −2 ∑_i v^i w^i = −2〈v, w〉. □
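Using the cliff_mul sketch above, the lemma can be spot-checked on explicit vectors (again our own code; embed and cliff_add are small helpers introduced here for the check).

```python
def embed(coords):
    """Embed (v^1, ..., v^n) in R^n into Cliff(n) as the element sum_i v^i e_i."""
    return {(i + 1,): c for i, c in enumerate(coords) if c != 0}

def cliff_add(a, b):
    out = dict(a)
    for key, c in b.items():
        out[key] = out.get(key, 0) + c
    return {key: c for key, c in out.items() if c != 0}

v, w = (1, 2, 0, 3), (4, -1, 5, 0)
lhs = cliff_add(cliff_mul(embed(v), embed(w)), cliff_mul(embed(w), embed(v)))
rhs = -2 * sum(vi * wi for vi, wi in zip(v, w))   # -2 <v, w>
print(lhs, rhs)   # {(): -4} -4: the bivector parts cancel and only -2 <v, w> remains
assert lhs == {(): rhs}
```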
Another useful observation about Cliff(n) is that it comes with a linear map T from
Cliff(n) to itself such that T^2 = id (called an involution). This map is uniquely determined by reversing the order of products of basis vectors à la

T(e_{i_1} · · · e_{i_k}) = e_{i_k} · · · e_{i_1}

and then extending linearly to the rest of Cliff(n). For an arbitrary g ∈ Cliff(n), we use the notation g^T = T(g). The similarity to the notation for matrix transposition is not a
coincidence!
Consider now the multiplicative group Cliff(n)× of invertible elements of Cliff(n). Note
that we have to be a little careful working with this group since it is nonabelian. There
is a nice subgroup G ⊂ Cliff(n)× generated by the unit vectors of Rn. Identifying Rn as
a subspace of Cliff(n), we claim that there is a (somewhat) natural left action of G on
Rn given by
g · w = g w g^T

for g ∈ G and w ∈ Rn. Of course, at the outset we have no reason to even believe that g w g^T is an element of Rn. To verify that we do in fact have a well-defined action of G, it
suffices to check on the generating set of unit vectors. Let v ∈ Rn with |v| = 1. Writing
v = ∑_i v^i e_i, we have

v^T = T(v) = T( ∑_i v^i e_i ) = ∑_i v^i T(e_i) = ∑_i v^i e_i = v.
Using the lemma, we compute
v · w = vwv = (−2〈v, w〉 − wv)v = −2〈v, w〉v − wv^2 = w − 2〈v, w〉v ∈ Rn,
which is exactly hyperplane reflection about the hyperplane perpendicular to v! Thus,
not only do we have a group action on our hands, we also know that the generating
set for G acts exactly as the generating set for On. This gives us a natural group homomorphism ϕ : G → On that sends g ∈ G to the linear transformation w ↦ g · w. This map is surjective by the Cartan-Dieudonné Theorem and so On ∼= G/ ker ϕ by the
First Isomorphism Theorem.
A little work gives that kerϕ = {±1} and so ϕ is a 2-to-1 map. Indeed,
(−v)w(−v) = vwv,
which is an expression of the fact that v and −v determine exactly the same hyperplane reflection. Our larger group G allows us to distinguish between v and −v. In addition, whenever g ∈ G happens to be a product of an even number of unit vectors, the map determined by g is orientation preserving, since it is a composition of an even number of hyperplane reflections.
What we just discovered is the group G = Pin(n) that double covers On, as well as
its subgroup Spin(n) generated by even products that double covers SOn. Thus, in one
fell swoop, we’ve addressed two of our earlier observations with the orthogonal group
and encoded them in a new mathematical object – the Clifford algebra. Because of the
way that multiplication in Cliff(n) seems to encode the geometry of Rn, some call the
Clifford algebra Clifford’s geometric algebra.
It’s a useful exercise to characterize these Clifford algebras in more familiar terms. For
example, Cliff(1) is isomorphic to the R-algebra C of complex numbers and Cliff(2) is
isomorphic to the R-algebra H of quaternions, both of which have an involution given by conjugation q ↦ q̄. If you're familiar with computer graphics, you might recall that rotations are often more compactly represented as quaternions, where the action of a unit quaternion q ∈ H on R3 is given by v ↦ qvq̄. This formula should look awfully familiar: it tells us that Spin(3) is isomorphic to the group of unit quaternions (a subgroup of the multiplicative group H×). The 2-to-1 map ϕ onto SO3 is just the quotient by antipodal points, giving an isomorphism SO3 ∼= RP3.
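For the curious, here is a small numerical illustration of that formula (our own sketch; the quaternion convention (w, x, y, z), with w the real part, is our choice). It rotates a vector by conjugation and checks that q and −q give the same rotation, which is exactly the 2-to-1 phenomenon.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """v -> q v q-bar, viewing v in R^3 as a purely imaginary quaternion."""
    return qmul(qmul(q, np.array([0.0, *v])), conj(q))[1:]

theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])  # rotation about the z-axis

v = np.array([1.0, 0.0, 0.0])
print(rotate(q, v))                              # approximately (0, 1, 0)
assert np.allclose(rotate(q, v), rotate(-q, v))  # q and -q act identically on R^3
```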
Although the geometric insights involving Pin and Spin are nice, there’s a very beautiful
theory concerning the algebras themselves. If we replace the inner product 〈·, ·〉 with
an arbitrary symmetric bilinear form b : Rn × Rn → R then we can repeat the same
construction as above to obtain Cliff(Rn, b). Every nondegenerate symmetric bilinear form is determined, up to a change of basis, by its signature – the number of 1's and −1's on the diagonal of
its matrix after diagonalizing – so we get an infinite family of algebras Cliff(p, q) (where
p denotes the number of 1’s and q the number of −1’s). Amazingly, this collection of
Clifford algebras is closed under taking tensor products. Via something called Bott
periodicity, it turns out that Cliff(0, n) and Cliff(n, 0) for 0 ≤ n ≤ 8 provide us with
enough information to reconstruct all Clifford algebras by taking tensor products.
With that, I leave you with the abstract definition of a Clifford algebra, which is the
usual context in which one first sees it. Can you see why this abstract definition agrees
with our own definition?
Definition. Given a vector space V and a symmetric bilinear form b : V × V → R, the
Clifford algebra is the data of a unital associative algebra Cliff(V, b) along with a linear
map ι : V → Cliff(V, b) such that, for any linear map ϕ : V → A of V into another unital
associative algebra A satisfying

ϕ(v)^2 = b(v, v) · 1,

ϕ factors through Cliff(V, b) to give a unique algebra map ϕ̃ : Cliff(V, b) → A such that ϕ̃ ◦ ι = ϕ, i.e., the evident triangle built from ι, ϕ, and ϕ̃ commutes.
INTERVIEW
3
An Interview with Prof. Freed
Zachary Gardner, ’20
Prof. Dan Freed is a Sid W. Richardson
Foundation Regents Chair in Mathematics
and Professor of Mathematics at the Univer-
sity of Texas at Austin. Prof. Freed’s work
centers around global issues in geometry and
global analysis.
When did your interest in math be-
gin?
I think at a pretty early age. My father
used to go bowling on Sundays and I would
go and try to bowl, though not very suc-
cessfully – I was probably four or five at the
time. Eventually I would wander over and
watch the men play. Pretty soon I was keep-
ing score. I suppose that was a sign that
I had some interest in numbers. That con-
tinued in a strong way through elementary,
middle, and high school. Anyway, as far as
I can remember I’ve always been interested
in math.
When it came time to go to college,
did you know if you wanted to major
in math?
I don’t recall making a conscious decision
– I think it was something I just knew. I
loved math and still love it. I just kept pur-
suing math, through college and into grad
school. At some point, maybe halfway into
my first year of grad school, I fell into a
small depression (if you can call it that)
when I realized that I had backed into a
mathematics career without quite choosing
it. I wasn’t depressed in the sense that I felt
I had made a bad choice – it was more like
an awareness of the situation overwhelmed
me for a little bit.
How long did you feel that way?
Not long, maybe a month or two. It didn’t
stop me from functioning or anything like
that. It was like, “Oh, this is where I’m
going. This is my future.” I don’t think
I had ever quite thought about planning a
career in that sense. I just kept following
my interests and that led me where it did.
Do you think your experience was
typical for the time?
Well I can’t say anything certain. Among
those who decided to continue on with their
studies, I think they felt similarly. Some
people were the children of mathematicians
and so it was ingrained in their family. For
others, I can’t really say. For me, being
a mathematician was something I couldn’t
not do.
Did your graduate experience unfold
in the same way? That is, did you
just sort of naturally stumble into
your thesis topic and other things like
that?
Well, even in high school I knew differen-
tial geometry was something I was really in-
terested in. I don’t know why exactly. Even
now, it’s not like I do differential geometry
in a form that most people would recognize,
meaning that I don’t really write papers
focused on the questions or tools of differ-
ential geometry. Probably the best way to
put it is to say differential geometry is my
first mathematical home. I was very fortu-
nate in college to have a group of graduate
students arrive during my junior year who
were all interested in differential geometry.
They had their own learning seminar going
through Spivak’s books on differential ge-
ometry. One of the students needed to learn
Lie algebras and so the group organized a
Sunday morning seminar on Lie algebras
following these beautiful notes written by
Hans Samelson. My job, as the undergrad-
uate, was to provide bagels! Anyway, this
group of graduate students really took me
in. It was a terrific experience and I’m very
thankful to them.
As for my PhD thesis, at some point my
advisor suggested three problems and one
of them developed into my thesis. At that
point loop groups were important, infinite dimensional Lie groups and Lie algebras were being developed – and a Kähler metric had been introduced on the based loop
group of a compact Lie group. Singer, my
advisor, asked me to compute the curvature
of that metric. I did so – laboriously at
first filling pages of a notebook, but in the
final version just a few paragraphs – and
that computation suggested other problems
and so I was on my way. I strongly believe
that computations are a great way to start
off, and a thesis advisor can do much worse
than suggest a well-chosen computation.
I know from talking to others, grad
students especially, that one of the
things a lot of people have ended up
greatly enjoying and depending on
at times is the social environment of
their institution. It’s a place for peo-
ple to lean on others, ask questions,
and be dumb from time to time.
Well, I think you have to be willing to
be dumb always. The moment you’re not
willing to be dumb is the moment you stop
learning and being able to move forward.
If you’re doing research then you’re always
dumb in a sense. But yes, as far as social
environment goes, I think the social environ-
ment is important at all levels and especially
in grad school. When people are picking
out grad schools, they often focus on who
is at each school and what specific research
they are doing. And if you’re advanced
enough then that’s very appropriate. But
you shouldn’t lose sight of the fact that the
grad school years are great years of your life.
If you’re not happy with the environment,
both in terms of the social life in the depart-
ment and physical aspects like geography
and climate, then you’re not going to do
good work. So, choosing a grad school has
to be a decision that takes environment into
account.
To give some personal background, I did
my graduate work at Berkeley. Before that,
I had lived in Chicago and Boston. Berkeley
in the early 80s was a little bit different –
not like Berkeley in the 60s but still a place
with its own quirkiness. In terms of the
math department, it was an amazing time
to be there, especially if you were doing
geometry. There were lots of classes, semi-
nars, and faculty doing interesting geometry
research. I was also playing lots of music
(orchestral trombone), and the Bay Area
offered many wonderful opportunities I was
able to take advantage of.
While I was at Berkeley, MSRI (Mathe-
matical Sciences Research Institute) opened.
I quickly learned that a lot of the people
visiting had left their families behind on
the east coast and come out to MSRI for
a semester. The professors in the math de-
partment were very busy – they often had
closed doors, didn’t want to be in their of-
fices, or were hassled with administrative
tasks and other things of that nature. The
people at the institute were a bit lonely, so
they were happy to talk math with me for
hours. I got a great education out of that!
Berkeley had strong programs in areas that
were of direct interest to me, which was
a great aspect of my graduate experience.
MSRI came along and accelerated the pace
of everything.
I know you were involved in the
founding of PCMI (Park City Math
Institute). Did your experience with
MSRI have any role to play?
I don’t think so, though I’ve never re-
ally thought about that. MSRI and PCMI
are actually very different from each other
– MSRI is primarily a research institution
that runs year-round, while PCMI is much
less focused on research and runs for three
weeks in the summer. I conceived of what
is now called PCMI in terms of what people
call vertical integration, which is the idea
that you should have researchers, graduate
students, undergraduates, and others all in-
teracting in a low-pressure environment. In
a way, the motivation behind PCMI was to
give back. I’ve always had amazing teach-
ers and opportunities, as well as a support-
ive and stimulating environment. PCMI
gives younger people something like that
with a chance to interact with older peo-
ple in a more relaxed setting away from
everyone’s home institution. The PCMI
philosophy also goes the other way around.
From the very beginning, you’re instilling
within young people this idea of mentoring
the next generation by showing them what
mentorship looks like.
Who else was involved in the found-
ing of PCMI?
I worked closely with Karen Uhlenbeck,
who was a colleague of mine at UT Austin
for many years and a great friend. Then
there were some mathematicians from the
University of Utah, Jim Carlson and Herb
Clemens. John Polking, the head of the
NSF math division at the time, also played
a very big part. Many others joined in –
building institutions which last is a commu-
nity effort. After a few years, it became
clear that the seed money for PCMI was
going to run out. So, Phil Griffiths, the
director of the IAS (Institute for Advanced
Study) at the time, stepped in. Phil under-
stood that adopting PCMI would be good
for the IAS and so the Institute basically
took over the administrative side of things
and now PCMI is a program of the IAS.
Of course, PCMI has continued to get huge
funding and support from the NSF.
Switching gears a little bit, you’ve
said on your website and elsewhere
that you’ve collaborated with physi-
cists and that your work has overlap
with physics. Could you say more about
that?
Well, geometry has a long history of en-
gagement with physics: trigonometry was
introduced to understand astronomical ob-
servations; Gauss came up with the Gauss-
Bonnet Theorem while he was director at
an astronomical observatory; and Newton
developed calculus in conjunction with his
physics theories. There are many more mod-
ern examples too, and our current period is
one of vigorous interaction. Of course, the
degree of engagement has varied over time.
Often, new ideas come into mathematics
from physics and elsewhere, resulting in a
period of internal mathematical work as
these ideas are absorbed. This involves the
development of formalism as a framework
for ideas, as well as theorems and useful
structures. Afterward, these mathematical
fruits are applied back to the physics and
elsewhere.
Deviating from your question a bit,
there’s this question of whether math is
a science or an art. This is of course a
false dichotomy, one of many such (pure vs.
applied, theory-builder vs. problem-solver,
etc.), as math is both a science and an art
in both its input and output. By input, I
mean motivation and inspiration. Math cer-
tainly draws from science (i.e., the physical
world), but it also grows from an artistic
perspective in that we may do math either
because it’s beautiful or the things we’re
working on are of internal importance to
mathematics. By output, I mean the actual
mathematics that is created. Math is an
achievement of the human spirit just like ar-
chitecture, painting, sculpture, music, and
other things typically considered to be arts.
At the same time, math applies to the real
world through either the theoretical aspects
of science or the development of technology,
economy, etc.
Getting back to your question, there’s the
example of the 20th century development
of symplectic geometry as a framework for
classical mechanics, and specifically, classi-
cal field theory, which is something I studied
as an undergraduate. When I got to grad
school, my advisor, Is Singer, was already
teaching courses on topics in theoretical
physics. His advisor, Irving Segal, was very
much involved with physics and the math-
ematics of quantum field theory. Is was
an early champion of the idea that certain
problems in physics could be a huge boon
to mathematics and, conversely, mathemat-
ics could have something to say about the
physics. He had already written important
papers solving problems in quantum field
theory and, at the same time, developing
ideas in geometry. With these ideas in mind,
Is ran a weekly seminar at Berkeley with the
aim of learning about supermanifolds and
quantum field theory. Is would teach for
the first two hours, then there would be a
seminar for the next two hours, often with
a physicist as speaker. Discussion would
continue over dinner for another two hours.
Altogether that’s six consecutive hours in
one day! There were many great visitors,
and getting to know them at those dinners
was an important part of my graduate ex-
perience.
Nowadays, interaction between math and
physics is much more typical, with con-
ferences, papers, and even entire research
institutes organized around collaboration.
Currently, quantum field theory and string
theory are a big focus here and elsewhere.
Another thing that’s gaining in popularity
is the interaction between topology and con-
densed matter physics. For me, physics has
always been a topic of interest and a source
of ideas. It’s tremendous fun to collabo-
rate with all sorts of different people with
different backgrounds, even despite the dif-
ficulty involved in learning new languages
and frameworks for certain concepts.
I’ve heard some mathematicians say
that the process of formalizing ideas
in a hot new area of math can some-
times kill a bit of the magic. Do you
have anything to say about that?
Well, I’m an unapologetic mathematician
and so I do believe firmly in the value of set-
ting up a strong mathematical framework.
However, such a framework should not
be a prison in which we trap ourselves. The
process of developing new math starts by
freely exploring without constraints. Basi-
cally, if you never allow yourself to move
beyond existing frameworks, then you won’t
be able to say something truly new worth
codifying. At the same time, I think new
definitions arise only from engaging with
and solving specific problems. Those defi-
nitions lead to theorems, the theorems to
solutions to other problems, and so the cy-
cle continues. You see that theory-building
and problem-solving go hand-in-hand, and
the magic emerges from the combination.
There are many fundamental and beauti-
ful definitions in math. Take the notion of
a group, for example, which came in part
out of algebraic work of Galois and Gauss
as well as the desire to have an object that
encodes geometric symmetries. Many of the
important properties that groups have come
from the choice of an elegant, sparse defini-
tion. Another beautiful notion is that of the
real numbers. We have a definition that is
also a theorem, which says that the field of
real numbers is the unique complete ordered
field. It’s amazing that just three words –
complete ordered field – can encompass so
much of the power that real numbers give
us. This is especially because, if you were to
go back in time, you likely wouldn’t guess
that such a simple definition would be the
end result of the analysis that preceded it.
So in this case it’s the definition providing
magic rather than taking it away.
Formalization can also provide magic
by connecting seemingly disparate ideas.
For example, there are many different ap-
proaches to QFT (quantum field theory).
There’s a physicist, Nati Seiberg, who likes
to say that there are too many coincidences
for us to think that we know the right start-
ing point for QFT. Basically, the coinci-
dences are a sign that we haven’t really un-
derstood the theory. Now, one can hope
that there will be some framework that
helps explain the coincidences, and in a
certain sense there is one already that has
been put forward by Graeme Segal and oth-
ers. Their axiom system is sparse, in the
same way that saying the real numbers are
a complete ordered field is a sparse charac-
terization. But that axiom system is only
a beginning, and has only been fully de-
veloped for theories that are special in the
world of all quantum field theories. All of
that aside, there are limitations to defini-
tions. The definition of a manifold doesn’t
tell you how to construct a manifold, and it
doesn’t give you examples. The definition
simply conveys what a manifold is and, his-
torically speaking, what a manifold ought to
be. The characterization we have of QFTs
gives us some inroad, but there’s still a long
way to go.
Are you actively involved in that
quest for better understanding
QFTs?
In some broad sense, sure. At any given
time, there’s a selection of problems that
I’m working on, poking at, and seeing if I
can make a little contribution here or there.
Dennis Sullivan once told me that he likes
to sit down a mathematician and ask them
to tell the story that weaves together all the
papers they’ve written in order. If I was to
sit down and do that then QFTs would be
part of the story.
Have you had any long periods of
time in which you were stuck on a
problem and maybe felt discouraged?
Sure, all of the time! I like to say that if
you’re not confused then you’re not work-
ing. Most endeavors aren’t quite like that,
but math research is. Fortunately, we have
the luxury in mathematics of being very
nimble, meaning that we can change direc-
tion on a dime because we don’t have to
buy expensive equipment or manage large
teams as researchers in laboratory sciences
do. We also don’t have to plan ahead nearly
as much – there are astronomers now who
are planning telescopes that won’t be online
for fifteen years.
As an aside, when I was in my last year
of graduate school, Ed Witten came to MIT
and lectured on a new formula for global
anomalies. I understood something about
it, and where it fit into some global analysis.
When I talked to my advisor, Is, about it he
told me to drop everything and work on this.
That made my thesis submission late! Also,
just prior to that, I had submitted a post-
doc application to the NSF. A year later,
I wasn’t working on any of the problems I
had proposed in my application since I was
following up the new ideas about anomalies.
Naturally, I was a little bit concerned and so
asked my elders about it. They told me not
to worry, which was a lesson to me about
the nimbleness of mathematics research and
how we should embrace that.
Coming back to the question, there are
basic techniques you can apply to get un-
stuck. Polya’s book How to Solve It has
good pointers that apply just as well to
research as any other kind of mathemat-
ical pursuit – break problems down into
smaller problems, look for related problems,
change the problem, play with the problem,
etc. The goal in general is not to spin your
wheels. This is one of the many benefits of
having great colleagues and collaborators.
I’ve found that I can easily start spinning
my wheels when I’m working by myself. You
get stuck on one idea, one thought, one di-
rection. You’re too stupid – you can’t do
it. So you ask someone else. And some-
times, just a few minutes of conversation
can clear things up. Of course, working
with collaborators is also lots of fun. You
need to spend many long hours working by
yourself to produce mathematics, but the
social aspect shouldn’t be understated.
Some strategies for unsticking yourself in-
clude working on different things in parallel,
focusing on administrative work, and simply
deciding to pick up a book and learn some-
thing completely new for a change. Another
important strategy is to focus on teaching.
As I said earlier, mentoring the next gen-
eration is very important. And one of the
many benefits of teaching is that it provides
you with a routine and the psychological
satisfaction of knowing that you’ve accom-
plished something no matter how research
is going. There are challenges there, too, so
everyone has to find their own balance.
To close, what do you enjoy most
about being a professor?
There are many things I enjoy about be-
ing a professor. It is a charmed life, with
a tremendous amount of freedom to pursue
research I’m interested in and to teach in
a flexible way. There are enjoyable inter-
actions with colleagues (both here at UT
and elsewhere), grad students, and younger
students looking for mentorship. There re-
ally are so many positives. The situation
is largely analogous to that of the artist or
novelist who has a position at a university
teaching and such but also produces their
own creative work.
STUDENT ARTICLE
4
Building and Solving Differential
Equations Using Electronic Circuits
Vic Frederick, ’20
Introduction
Ordinary Differential Equations (ODEs) express a relationship between a function of
one variable and its derivatives. Circuits can be used to build and solve these ODEs.
Such circuits have historically been called analog computers. Primarily used to calculate
projectile motion during WWII, these analog computers now find their home in control
systems. This article will go into detail on how circuits capture differential behavior, how
to turn a differential equation into a circuit, and how to solve differential equations using
circuits.
Building Blocks
Kirchhoff’s circuit laws provide the first building block for modeling differential equa-
tions using circuits. These two laws relate current and voltage to the graph-like structure of a circuit, with current a quantity attached to edges and voltage a quantity attached to vertices. Kirchhoff's Voltage Law says the voltage drops sum to zero around a cycle (i.e., an ordered collection of vertices beginning and ending with the same vertex), while Kirchhoff's Current Law says the signed sum of the currents at a vertex is zero (i.e., the sum of incoming current equals the sum of outgoing current). In engineering parlance, the edges are the components of a circuit and the vertices are the nodes – the places where two or more components meet. Kirchhoff's
circuit laws therefore allow us to construct differential equations by inspecting cycles and
nodes.
Figure 1: Linear Circuit Components
The next step is characterizing the electric components shown in Figure 1. Current
through a component can be related to the voltage across it. For a resistor with constant
resistance R, the relationship between voltage and current is described by Ohm’s Law:
V = RI (1)
Current through a component can also be related to the change in voltage across it.
For a capacitor with constant capacitance C, the relationship is described by
I = C dVC/dt (2)
Here, VC is the voltage across the capacitor. Sources, the remaining components, either
add voltage across a node or specify a current.
Building Differential Equations
Consider the following series circuit, so called because it consists of a single cycle.
Figure 2: RC Circuit
Kirchhoff’s Voltage Law and Ohm’s Law together give
0 = VR + VC − V1 = RI + VC − V1 (3)
The current is constant throughout the series circuit because each node is the junction of only two components. Using Equation (2) to write I in terms of VC and solving for V1 gives the first-order linear non-homogeneous differential equation

V1 = RC dVC/dt + VC (4)
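Before turning to hardware, note that Equation (4) can also be integrated numerically. Here is a minimal sketch (our own; the component values and the 5 V step input are made up for illustration) using scipy, compared against the known closed-form step response.

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1e3, 1e-6       # 1 kOhm and 1 uF, so the time constant RC is 1 ms
V1 = 5.0               # a 5 V step input applied at t = 0

def dVC_dt(t, VC):
    # Equation (4) rearranged: dVC/dt = (V1 - VC) / (RC)
    return (V1 - VC) / (R * C)

sol = solve_ivp(dVC_dt, (0.0, 5e-3), [0.0], max_step=1e-5)
VC_exact = V1 * (1.0 - np.exp(-sol.t / (R * C)))   # closed-form response to a step
assert np.allclose(sol.y[0], VC_exact, atol=1e-3)
print(f"VC after five time constants: {sol.y[0][-1]:.3f} V (approaches {V1} V)")
```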
We can perform a similar procedure to find a second-order linear non-homogeneous
equation characterizing the circuit shown below.
Figure 3: Second Order RC Circuit
Kirchhoff’s Current Law and Equation (2) together give
I1 = IC1 + IC2 = C1 dVC1/dt + C2 dVC2/dt (5)

Kirchhoff's Voltage Law and Ohm's Law together give

VC1 = VR + VC2 = R C2 dVC2/dt + VC2 (6)

Substituting this into Equation (5) and collecting like terms gives

I1 = C1 d/dt [ R C2 dVC2/dt + VC2 ] + C2 dVC2/dt = R C1 C2 d^2VC2/dt^2 + (C1 + C2) dVC2/dt (7)
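Equation (7) is second order, but it can be handled the same way by rewriting it as a first-order system in VC2 and dVC2/dt. A rough sketch (the component values and the constant current source are our own illustrative choices):

```python
from scipy.integrate import solve_ivp

R, C1, C2 = 1e3, 1e-6, 2e-6   # illustrative values, not from the article
I1 = 1e-3                      # a constant 1 mA current source

def rhs(t, y):
    # Equation (7): I1 = R C1 C2 V'' + (C1 + C2) V', with V = VC2 and state y = [V, V'].
    V, dV = y
    d2V = (I1 - (C1 + C2) * dV) / (R * C1 * C2)
    return [dV, d2V]

sol = solve_ivp(rhs, (0.0, 20e-3), [0.0, 0.0], max_step=1e-5)
print(f"VC2 at t = 20 ms: {sol.y[0][-1]:.3f} V")
```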
We now have two examples of recreating differential equations using circuits, and thus
two cases in which we can approximate solutions by taking measurements. Namely, we
can use an oscilloscope to read the relevant capacitor voltages and thereby measure
solutions to Equation (4) and Equation (7). Unfortunately, this approach has two key
issues.
1. Linear electric circuits will not have outputs that rise above the input – i.e., the
waveform will be chopped off above the maximum of the non-homogeneous side of
the equation.
2. Such circuits are often difficult to synthesize even if they are easy to analyze.
Fortunately, there is an electronic component known as an Operational Amplifier that
allows us to address these issues.
Operational Amplifiers
Consider the following schematic.
Figure 4: Operational Amplifier with Supply Lines
Ideally, Operational Amplifiers (Op-Amps for short) keep the same voltage at nodes
A and B shown in Figure 4. This is done using feedback from Vout. V+ and V− are the
power supply lines, which limit Vout via
V− ≤ Vout ≤ V+ (8)
Staying within these limits is important for minimizing distortion. For the sake of
simplicity, we will largely ignore supply lines when building equations. One big advantage
of Op-Amps is that they allow us to neatly package the available operations of sums,
coefficients, and derivatives – the building blocks of linear differential equations. This is
illustrated by using Kirchhoff’s Current Law at node A of the figures below (note that
node A is not explicitly marked but can be identified by comparison with Figure 4).
Figure 5: Coefficient Circuit
Vout = −(R1/R2) Vin (9)

Figure 6: Summing Circuit

Vout = −(R1/R2) Vina − (R1/R3) Vinb (10)

Figure 7: Differentiating Circuit

Vout = −RC dVin/dt (11)
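To see how these blocks package the operations of a linear ODE, the following rough sketch (ours; the resistor and capacitor values are arbitrary) models Equations (9)-(11) as operations on a sampled signal, with the derivative approximated by a finite difference.

```python
import numpy as np

dt = 1e-5                                  # sampling step, in seconds
t = np.arange(0.0, 10e-3, dt)
Vin = np.sin(2 * np.pi * 100 * t)          # a 100 Hz test signal

def coefficient(V, R1, R2):
    """Coefficient circuit, Equation (9): Vout = -(R1/R2) Vin."""
    return -(R1 / R2) * V

def summing(Va, Vb, R1, R2, R3):
    """Summing circuit, Equation (10): Vout = -(R1/R2) Va - (R1/R3) Vb."""
    return -(R1 / R2) * Va - (R1 / R3) * Vb

def differentiating(V, R, C):
    """Differentiating circuit, Equation (11): Vout = -RC dV/dt (finite difference)."""
    return -R * C * np.gradient(V, dt)

# Chaining blocks: each stage flips the sign, so two stages restore it.
Vout = coefficient(differentiating(Vin, R=1e4, C=1e-6), R1=1e4, R2=1e4)
print(Vout.max())   # roughly RC * 2*pi*100 = 6.3, the peak of +RC dVin/dt
```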
Selecting appropriate resistances and capacitances therefore allows us to control any
coefficients that appear. To simplify the process of building equations, we will treat these
circuits like black boxes and adopt the notational shorthand shown in the following figure.
Note that each operational block changes the sign of its input, which is why such circuits
are called Inverting Amplifiers.
Figure 8: Circuit Shorthand
The symbol A in the Coefficient Circuit represents a choice of coefficient, which should
be clear in context. Because we are dealing with an Inverting Amplifier, the output is
−A times the input. Now, let’s put these tools to use. Consider the following differential
equation.
Vin = B dVout/dt + Vout (12)
Here, Vin is the input voltage, Vout is the desired output voltage, and B is a constant.
A first attempt at representing this equation is as follows:
Figure 9: ODE Block Representation 1
When reading these sorts of diagrams, note that all inputs enter the left of a block
and all outputs exit the right of a block. The result of such an analysis is
Vout = −Vin + dVout/dt (13)
A slight modification to account for signs and the coefficient on the derivative yields
the desired equation.
Figure 10: ODE Block Representation 2
To close, try writing out the equation associated to the following diagram. You will
see just how much Op-Amps simplify the design process.
Figure 11: Differential Equation Solver Implemented with Operational Amplifiers
SPECIAL FEATURE
5
A Word from the Students
David Green is a third-year undergrad math major at UT interested in representation
theory and the theory of tensor categories. Here, David speaks on his REU experience.
I attended the Summer 2018 REU at Texas A&M University, working under Professor
Eric Rowell in the Mathematics Topological Computation group. Since I had no prior
research experience, the change in both pace and content was, to say the least, somewhat
jarring. Learning primarily from papers instead of textbooks and solving a problem over
a couple of months instead of somewhere between a couple of hours and a day presented
me with some newfound challenges. I had to brainstorm new strategies for categorizing
information which didn’t neatly build off of things I already knew. Not making progress
on a problem for a week (or sometimes much longer. . . ), though relatively typical within
research mathematics, was psychologically taxing for me. In my own experience, I’m used
to solving problems I’m given almost immediately. For the REU, despite having only a
single problem and 8 weeks in the program to work on it, I still have some calculations
to do before I can really say I’m done. Even though I was learning a lot and working
hard most of the time, I always felt like I hadn’t done anything – an experience which
was difficult for my self-image. At the end of the day, though, I got through it and have
a better idea of what doing “real” math is like. I feel more confident about applying to
graduate schools for their PhD programs, which I count as a success for the REU.
As to the actual content of my research, I helped with a classification program for rank
6 modular tensor categories. I succeeded in showing that in the case where such objects
have a certain type of Galois group, all possible instances are in fact already known up
to a certain equivalence (subject to the calculations actually checking out).
Tom Gannon is a third-year math grad student at UT interested in algebraic geom-
etry, representation theory, and number theory. Here, Tom reflects on how his perceptions
of math have changed since his undergrad years.
One thing that has changed drastically for me is my concept of what math actually
“is.” As an undergrad, I assumed that I would master one area (say algebra) then master
another area (say algebraic geometry), continuing until everything would all come together
and then I would finally be able to produce research. I realized sometime during my
second or third year of grad school that math doesn’t really work like that. Quite often,
theories are built upon results that people haven’t mastered yet. The most “cutting edge”
math research would only become accessible after more than 7 years if you started with
the foundations, filling in all the little holes one at a time. Instead, you learn to build a
working knowledge of everything that serves as a guide.
To give an example, for the first time in my life I am participating in a learning
seminar whose explicit goal is not to achieve complete mastery of the theory of “infinity
categories,” whatever that means. Instead, the goal is to get a broad sense of the outline
of the theory. Before, I would have scoffed at this idea, but now I see that such an
approach is sufficient and even beneficial for doing research.
Another thing that has changed for me is my perspective on the role of intelligence in
math. Perhaps due to things going well for me in high school math-wise, I spent a lot of
time as an undergrad thinking that I was “naturally” good at math and any struggle
was a sign that I wasn’t good enough. I now look at things in totally the opposite way.
Math is a skill that can be developed with practice – what really matter are your passion
and drive. If I could give one piece of advice for soon-to-be grad students, it would
be that you’re not as “far away” from the field as you might think. “Natural talent”
is almost worthless for doing research-level mathematics, which usually involves many
medium-depth insights into a problem rather than one big magical insight that could
only come from your “intelligence.”
Challenge Problem
Here is a version of Fermat’s Little Theorem for matrices. Let n be a positive
integer, I the n× n identity matrix, A an n× n integer matrix, and p a positive
prime. Let tr(A) denote the trace of A, given by the sum of the diagonal entries
of A. Show that
tr(A^p) ≡ tr(A) (mod p).
Think you have a proof? Send your answer to [email protected].
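Not a proof, but here is a quick numerical sanity check of the statement (our own sketch, reducing modulo p after each multiplication to keep the integers small):

```python
import numpy as np

def trace_power_mod(A, p):
    """Compute tr(A^p) mod p, reducing mod p after every multiplication."""
    B = np.eye(len(A), dtype=np.int64)
    M = A.astype(np.int64) % p
    for _ in range(p):
        B = (B @ M) % p
    return int(np.trace(B)) % p

rng = np.random.default_rng(42)
A = rng.integers(-10, 10, size=(4, 4))     # a random 4x4 integer matrix
for p in (2, 3, 5, 7, 11, 13):
    assert trace_power_mod(A, p) == int(np.trace(A)) % p
print("tr(A^p) = tr(A) (mod p) for all primes tested")
```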
SPECIAL FEATURE
6
Something to finish on. . .
Some of us here at UT had the pleasure last fall of taking Prof. Gordon for M382C
Algebraic Topology. A jovial man with a quirky and ever-present sense of humor, Prof.
Gordon produced for us a number of memorable quotes and expressions. We record a
few of them below.
♣ Sorry that I’m always chuckling to myself. I guess I’m just easily amused.
♣ I used to give problems that were wrong on purpose. . . because that’s how life is.
♣ A chain complex is. . . well, exactly what it is.
♣ Once you’re confident it’s right, it all works out!
♣ You never need to prove anything, you just stare at it and say “yes, yes, yes!”
♣ This is all an illusion, we haven’t proven anything!
♣ I’m going to chicken out on the full proof of excision. Besides, you guys know how
to read.
♣ At least it’s true, regardless of whether or not it’s useful [regarding reduced
homology].
♣ You all know how comparing different groups goes in an introductory algebra course.
You have four groups and the question is which groups are isomorphic. The answer
is: yes, yes, no, and yes.
♣ What are we trying to do here? I’ve already forgotten [stated at the end of a rather
short proof].
Thanks for reading!