Lecture Notes on Mathematical Methods 2015–16

2.4 Representations of Groups
We have already mentioned that groups can be associated with symmetries, but we have to make this connection explicit in
the language of group theory. Doing so will help us flesh out the rather abstract ideas and tools we have introduced. We shall
find that linear operators on vector spaces (most often, operators on a Hilbert space) provide us with this connection.
2.4.1 What is a symmetry?
Let G be a group of linear transformations that act on some x ∈ Rn. Let us also give ourselves functions f(x) that are
square-integrable, i.e., that live in a Hilbert space, L2 ≡ H, which we call the carrier space.
Definition 2.25. Let g ∈ G. We define an action from the left, [Tg f](x) = f(g−1 x), and an action from the
right, [Tg f](x) = f(x g), ∀ f. The set of operators {Tg} introduced in this way, which act on the functions
themselves, is also a group.
Why did we define the left action of a group G as g−1 x, and not g x? Surely, since g and its inverse are both elements of
the same group, it should not matter which we use. True, but “left” really only makes sense in relation to “right”. When we
define the right action of a group, we have to use the inverse of the element as written in the left action. But we could always
use g for any one action so long as we use g−1 for the other. . . couldn’t we?
Well, let us check whether the {Tg} do form a group with, say, group action from the right defined as x g, or, alternatively,
as x g−1. Let us denote by Tgi gj the transformation associated with the group product gi gj ∈ G. Then, abbreviating the
subscripts gi and gj to i and j so as to lighten the notation:
$$ T_{ij} f(x) = f(x\, g_i g_j) = [T_j f](x\, g_i) = [T_i T_j f](x) $$
which means that the T operators form a group; but what if instead:
$$ T_{ij} f(x) = f\big(x\, (g_i g_j)^{-1}\big) = f(x\, g_j^{-1} g_i^{-1}) = [T_i f](x\, g_j^{-1}) = [T_j T_i f](x) $$
Something awkward has happened: if we write the right action as x g−1 , the associated transformations do not form a group!
And, as you should verify, neither do they if we write the left action as g x.
So, as a matter of notational consistency, we should always write x g and g−1 x, which is indeed what BF do (without
much explanation).
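This bookkeeping can be checked mechanically. In the sketch below (our own illustration, not from the notes), the group is S3 acting on itself from the right, and a "function" is simply a table of values on the group: defining [Tg f](x) = f(x g) reproduces the group product, while [Tg f](x) = f(x g−1) reverses it.

```python
import itertools

# Verify with S3 that the right action [T_g f](x) = f(x g) gives
# T_{gh} = T_g T_h, while [T_g f](x) = f(x g^{-1}) gives T_{gh} = T_h T_g.
# Points x are taken to be group elements; f is any function on them.
G = list(itertools.permutations(range(3)))

def mul(a, b):                        # (a b)(j) = a(b(j))
    return tuple(a[b[j]] for j in range(3))

def inv(a):
    out = [0, 0, 0]
    for j, i in enumerate(a):
        out[i] = j
    return tuple(out)

f = {x: i for i, x in enumerate(G)}   # an arbitrary test function on G

def T(g, f, use_inverse=False):
    h = inv(g) if use_inverse else g
    return {x: f[mul(x, h)] for x in G}

for g in G:
    for h in G:
        # x g: true homomorphism
        assert T(mul(g, h), f) == T(g, T(h, f))
        # x g^{-1}: order reverses (antihomomorphism)
        assert T(mul(g, h), f, True) == T(h, T(g, f, True), True)
```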
Let there be a linear operator Ax such that, ∀ f ∈ H, [Ax f ](x) = h(x), where h ∈ H. Then we transform Ax under G
in the following way: Tg Ax Tg−1 .
Definition 2.26. When Tg Ax Tg−1 = Ax , ∀ g ∈ G, Ax is said to be invariant under the action of the group.
If also Tg f (x) = f (x), we often say that f is invariant under G itself as well.
Since the condition for invariance can also be written as Tg Ax = Ax Tg , ∀ g ∈ G, then an operator that is invariant under a
group of operators must commute with all the operators in that group.
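A finite-dimensional toy version of this statement (our own illustration, not from the notes): on a periodic grid, the cyclic shift plays the role of Tg and the discrete Laplacian plays Ax; both are circulant matrices, so they commute, i.e. the discrete Laplacian is invariant under discrete translations.

```python
import numpy as np

N = 8
shift = np.roll(np.eye(N), 1, axis=0)      # cyclic translation by one grid site
lap = (np.roll(np.eye(N), 1, axis=0)       # discrete Laplacian, periodic b.c.
       + np.roll(np.eye(N), -1, axis=0) - 2 * np.eye(N))

# Invariance T A T^{-1} = A is the same statement as the commutation T A = A T:
assert np.allclose(shift @ lap @ np.linalg.inv(shift), lap)
assert np.allclose(shift @ lap, lap @ shift)
```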
Example 2.4. The group of translations
Let a ∈ R3. Then the {Ta}, where Ta x = x − a, form an Abelian group, since Ta Tb = Tb Ta = Ta+b. Now let f
be an analytic function. Then:
$$ f(T_a^{-1}\, x) = f(x + a) = \sum_{n=0}^{\infty} \frac{1}{n!}\,(a \cdot \nabla)^n f(x) = \left[ e^{\,a \cdot \nabla} f \right](x) $$
We identify T_a = e^{a·∇}; indeed, [T_a f](x) = f(T_a^{-1} x) = f(x + a). The operator ∇ is called the infinitesimal
generator of translations.
Now e^{a·∇} in its usable Taylor-expansion form plainly commutes with the Laplacian ∇², because all derivatives
commute. So the Laplacian is invariant under the group of translations.
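The identification T_a = e^{a·∇} is easy to test in one dimension, where a polynomial makes the Taylor series terminate, so the sum Σ aⁿ/n! f⁽ⁿ⁾(x) reproduces f(x + a) exactly. A minimal sketch (the helper names and the sample cubic are our own choices, not from the notes):

```python
import math

def deriv(coeffs):
    """Derivative of a polynomial given as coefficients [c0, c1, c2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def translate(coeffs, a, x):
    """Apply e^{a d/dx} f at x, i.e. sum_n a^n/n! f^(n)(x); finite for polynomials."""
    total, n = 0.0, 0
    while coeffs:
        total += a**n / math.factorial(n) * evaluate(coeffs, x)
        coeffs = deriv(coeffs)
        n += 1
    return total

f = [1.0, -2.0, 0.0, 3.0]             # f(x) = 1 - 2x + 3x^3, an arbitrary example
x, a = 0.7, 1.3
assert abs(translate(f, a, x) - evaluate(f, x + a)) < 1e-12
```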
If Ax has eigenvalues and eigenfunctions, and if it is invariant under G, then there should exist a set of functions {f i}
such that:
$$ A_x\, T_g f^i(x) = T_g A_x f^i(x) = \lambda\, [T_g f^i](x) $$
which says that if f i is an eigenfunction of Ax, so is Tg f i, with the same eigenvalue. If the eigenvalue is non-degenerate, i.e.,
if f i is unique up to normalisation, then Tg f i must be proportional to f i, i.e., f i is also an eigenfunction of Tg, but with
some a priori different eigenvalue that may depend on g. In the degenerate case, however, given N degenerate eigenfunctions,
all we can say is that the Tg f i are linear combinations of the {f j}:
$$ T_g f^i = f^j D^j{}_i(g) \qquad (2.3) $$
with summation over repeated indices implied.
2.4.2 Matrix representations of a group (BF10.4)
Definition 2.27. A representation D of a group G is a homomorphic mapping onto a set of finite-dimensional
invertible matrices such that D(e) = I, the identity matrix, and D(gi ) D(gj ) = D(gi gj ), in the sense that matrix
multiplication preserves the group composition law.
If the homomorphism is one-to-one, the representation is said to be faithful. The dimension of the representation
is the dimension of its matrices or, equivalently, the dimension of the carrier space on which it acts.
Whenever we find a set of degenerate eigenfunctions for some operator that is invariant under a group G, we expect to be
able to connect these functions to a representation of the group.
Matrix representations arise in a much more general context than symmetry. The group GL(N, C) can be thought of
as the set of all invertible linear transformations of a vector space of complex-valued functions V = {f(x)}, where x ∈ Rn.
If {ei} is a basis for V, then x = x^i e_i, where the x^i are the components of x in the basis, and the subscript i on the basis
vectors specifies which vector, not which component of a vector.
Now let us simplify things a bit by taking f (x) = x. Then the left action of an element g ∈ G, expressed in terms of the
linear transformations Tg , must be written as:
$$ T_g(x) = g^{-1} x = x^i\, g^{-1} e_i = x^i\, e_j\, D^j{}_i(g^{-1}) \qquad (2.4) $$
Only this exact definition of the associated D matrices preserves the group product of G. Indeed:
$$ g_1^{-1} e_i = e_j\, D^j{}_i(g_1^{-1}) $$
$$ g_2^{-1} g_1^{-1} e_i = g_2^{-1} e_j\, D^j{}_i(g_1^{-1}) = e_k\, D^k{}_j(g_2^{-1})\, D^j{}_i(g_1^{-1}) $$
$$ g_2^{-1} g_1^{-1} e_i = e_k\, D^k{}_i(g_2^{-1} g_1^{-1}) $$
so that D(g2−1 g1−1 ) = D(g2−1 ) D(g1−1 ), or D(g1 g2 ) = D(g1 ) D(g2 ), as required for the D matrices to have the same product
rule as the group. This is perfectly consistent with eq. (2.3) above, but now we know that eq. (2.3) corresponded to the left
action of the group, g−1 f i , which was not so obvious because of the use of the Tg operators which always act from the left.
Notice that so long as we use the last member of eq. (2.4), we can dispense with the notational constraint that demands
that we write the group’s left action as g−1 x. Indeed, if you replace g2−1 by g2 and g1−1 by g1 in the proof just above, you
will still get: D(g2 g1 ) = D(g2 ) D(g1 ).
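A quick concrete check of the product rule (our own example, not from the notes): represent S3 by permutation matrices, D(s) e_j = e_{s(j)}, and verify D(s∘t) = D(s) D(t) over all pairs of elements.

```python
import itertools
import numpy as np

# Permutation-matrix representation of S3: D(s) e_j = e_{s(j)},
# so column j of D(s) has a single 1 in row s(j).
def D(s):
    m = np.zeros((3, 3))
    for j, i in enumerate(s):
        m[i, j] = 1.0
    return m

def compose(s, t):
    """(s o t)(j) = s(t(j))."""
    return tuple(s[t[j]] for j in range(3))

# D(s o t) = D(s) D(t) for every pair: the matrices preserve the group product
for s in itertools.permutations(range(3)):
    for t in itertools.permutations(range(3)):
        assert np.allclose(D(compose(s, t)), D(s) @ D(t))
```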
It is an instructive exercise to show that the proper way of expressing the right action of the same group, x g, in terms of
its (right) representation D matrices is:
$$ x\, g = x^i\, e_i\, g = e_i\, D^i{}_j(g)\, x^j \qquad (2.5) $$
in which D acts on the x^i written as a column vector. Because of this, some people see the right action as the more
“natural” one. For a given g, the right D matrices are in general different from the left ones.
2.4.3 Non-unicity of group representations
One might hope to define an algorithm that would churn out the representation of a group. But there is no such thing
as a unique representation! Indeed, suppose we have a set of n-dimensional matrices that represent a group. It is always
possible to obtain another representation, also of dimension n, by mapping all these matrices to the identity matrix. This is
called the identity representation, and it always exists. Also, the homomorphic map of the same matrices to their determinants
preserves the group product (since det(AB) = (det A)(det B)), which provides another representation, this time
one-dimensional. Of course, nobody claims that such representations are faithful. . .
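The determinant representation works only because det is multiplicative; here is a one-line numerical sanity check (random matrices are just our stand-in for arbitrary invertible group elements):

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed, reproducible example
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) det(B): the map g -> det D(g) preserves the group product
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```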
Also, we can make a change of basis: e′_i = e_j S^j{}_i, or e_i = e′_j (S^{-1})^j{}_i. Then we have the similarity transformation
D′(g) = S^{-1} D(g) S, and the D′ obey the same product rules as the D matrices.
Definition 2.28. Representations connected by a similarity transformation are said to be equivalent. They differ
only by a choice of basis.
Example 2.5. Consider the continuous group of rotations around the z axis, embedded into the group of three-dimensional rotations. We focus on its left action and look for representations.
We parametrise a rotation by g = Rα such that Rα φ = φ − α. This would correspond to rotating the standard
basis in R3 by α, with the coordinates of a vector characterised by angle φ being mapped to the same coordinates
of that vector rotated by −α in the initial basis (often called a passive transformation). One method for finding
representations is to use eq. (2.4), which becomes in this context:
$$ R_\alpha f_i(\phi) = f_i\big(R_\alpha^{-1} \phi\big) = f_i(\phi + \alpha) = D^i{}_j(-\alpha)\, f_j(\phi) $$
We want to find a set of functions which transform into linear combinations of themselves under Rα . Try
f1 = cos φ, f2 = sin φ. Then:
$$ R_\alpha f_1(\phi) = \cos(\phi + \alpha) = \cos\alpha \cos\phi - \sin\alpha \sin\phi = \cos(-\alpha)\, f_1(\phi) + \sin(-\alpha)\, f_2(\phi) $$
$$ R_\alpha f_2(\phi) = \sin(\phi + \alpha) = \sin\alpha \cos\phi + \cos\alpha \sin\phi = -\sin(-\alpha)\, f_1(\phi) + \cos(-\alpha)\, f_2(\phi) $$
Compare this with D^i{}_j(-α) f_j(φ), and switch the sign of α to obtain the D(α) matrix:
$$ D^{(1)}(R_\alpha) = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix} $$
Well, that's one two-dimensional representation of SO(2), and it is probably the one most often used. But it is not
the only one! If we had instead chosen f_1 = e^{iφ}, f_2 = e^{−iφ}, going through the same procedure would yield
another matrix:
$$ D^{(2)}(R_\alpha) = \begin{pmatrix} e^{i\alpha} & 0 \\ 0 & e^{-i\alpha} \end{pmatrix} $$
so here is another, different two-dimensional representation. Or is it different? In fact, no, because the similarity
transformation S^{-1} D^{(1)} S, with the single matrix
$$ S = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} $$
diagonalises D^{(1)} into D^{(2)} for any angle α, i.e., for all elements of the rotation group.
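The claimed equivalence is easy to verify numerically (a sketch assuming S = (1/√2)[[1, 1], [i, −i]] as in the example):

```python
import numpy as np

# Check that S^{-1} D1(alpha) S = D2(alpha) for a few angles.
S = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)

def D1(a):
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

def D2(a):
    return np.diag([np.exp(1j * a), np.exp(-1j * a)])

for a in (0.0, 0.4, 1.7, np.pi):
    assert np.allclose(np.linalg.inv(S) @ D1(a) @ S, D2(a))
```

The columns of S are just the eigenvectors (1, i) and (1, −i) of D⁽¹⁾, which is why a single S works for every angle.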
But there are more: each of the linearly independent functions e^{iφ} and e^{−iφ} also carries a perfectly acceptable
one-dimensional representation of SO(2), namely e^{iα} and e^{−iα}! Both D^{(1)} and D^{(2)} can be viewed as a joining
of these one-dimensional representations, which we shall call D^{(3)} and D^{(4)}. Obviously, there must be something
special about those two. Before we discover what it is, let us look at another instructive example.