The story of moonshine, part I: symmetry, number theory, and the monster



Suggested background: none!

1. Introduction
2. Symmetry
     2.1. Wallpapers: an amuse-bouche
     2.2. Groups: the algebra of symmetries
     2.3. Classification: finding a periodic table of groups
     2.4. Representations: how groups act in different dimensions
3. Numbers
     3.1. Fermat: the math world’s biggest troll?
     3.2. Modular forms: the fifth elementary operation of arithmetic
4. Moonshine

1. Introduction

When I tell people I study moonshine, it usually invites puzzled looks. In fact, the field remains relatively obscure, even to seasoned mathematicians and physicists. Of course, this is a travesty; the story of moonshine is beautiful, and it involves insight drawn from a century’s worth of ideas. And with just a little bit of work, anyone can appreciate how elegant and bizarre it is.

People usually joke that the study of moonshine was initiated by mathematician John McKay’s observation that

\displaystyle 196884 = 196883 + 1.

Profound, no? To understand why this simple identity perplexed mathematicians, consider the dilemma our friend Hurley faced in the show Lost. Some time after winning the lottery with the numbers 4, 8, 15, 16, 23, 42, he found that these exact numbers formed the passcode to a computer on a magical island which was responsible for staving off an apocalypse (or something like that). Indeed, a natural question is, “Why would these special numbers show up in two places that appear to have nothing to do with each other?”

Moonshine is analogous. The number on the left, 196884, is a very distinguished number which naturally appears when studying mathematical objects called modular forms. The numbers on the right, 196883 and 1, appear when investigating properties of the Monster group and the dimensions it acts in (we will define all these terms soon enough). What is more, infinitely many such relationships were discovered, and there was no reason to suspect that the different branches of mathematics involved had anything to do with each other, at least not in the way suggested by these equations.

Ultimately, I think the writers of Lost offered up some sort of painfully contrived explanation for the lottery numbers involving smoke monsters and purgatory (or not, there was really no resolution to that show). It turns out that the (partial) explanation for moonshine is only slightly less bizarre: the connection between the Monster group and modular forms can be seen via a particular string theory which describes particles whizzing around and interacting on a 24-dimensional doughnut-shaped space.

This all motivates giving moonshine a logo (the idea for which I am shamelessly stealing from Jeffrey Harvey):

In the diagram, each dot (we’ll call them vertices) is meant to represent a different ‘mathematical object’; they are the group, the modular form, the algebra, and the string theory. The lines themselves (let’s call them edges) are the relationships between these objects. Finally, the overall tetrahedron will represent the abstract, theoretical structure that arises when you collectively consider all of these connections together: moonshine!

I’ll try wherever possible to inject a bit of philosophy into the discussion, but this will be made difficult by 1) my own ignorance and 2) the infancy of the field — we still don’t have a satisfying framework explaining why moonshine works the way it does. At the very least, I hope you’ll get a visceral sense for how strange and unexpected the universe can be, even the little universes that exist perhaps only in our imaginations.

To make this enjoyable for the lay person as well as the more technically inclined reader, the main narrative will be an intuitive, qualitative discussion, and the rigorous formulations of the concepts will be available on the side. At times, I will bend the truth, or completely break it — this is the price of maintaining some degree of simplicity, so I apologize to any mathematicians in the audience in advance! Here’s the plan, split across two blog posts.

Part I
1) I’ll start with a basic discussion of symmetry. To ground ourselves, we’ll study wallpapers and arrive at the notion of a ‘group’ in trying to capture our analysis mathematically.
2) We’ll leap over to the more technical, but (with a little bit of perspiration) just as pretty world of number theory and modular forms. I’m of the opinion that most mathematicians who work with these objects probably couldn’t explain to you in any intuitive sense why we should care about them or why they show up in math and physics. Because I am no better, I’ll perhaps only spend enough time here to convince you that modular forms are special, which will make their reappearance later all the more shocking.
3) We’ll finally be ready for our first glimpse at a bit of mathematical moonshine, but we’ll postpone the full discussion, which includes ideas from physics, until the next blog post.

Part II
4) I’ll give a brief overview of the physics relevant to moonshine (thermodynamics, quantum theories, etc.) and hopefully show you why some of the same ideas that we used to study wallpapers can be used to learn things about the way the universe works.
5) Crappy attempts at high level remarks.
6) I’ll explain the work I’ve been doing with Jeff Harvey on describing a new kind of moonshine and the analogies that exist between it and the old kinds.

Let’s get started!

2. Symmetry

A high level discussion of symmetry usually begins with the Platonic solids, the Greeks, or some sort of appeal to beauty and elegance. It’s true of course that we could spend an eternity discussing the central position these concepts have occupied in the development of math and physics, but for our purposes, it’s best if we concretely establish the two very particular kinds of symmetry that we’ll be invoking throughout the remainder of this series.

The first kind is geometric symmetry. An example I like to have in the back of my mind is that human beings are approximately geometrically symmetric: if we ignore internal organs and draw a vertical line from someone’s head to her toes, her left-hand side is roughly a mirror image of her right-hand side.

A closely related kind is a symmetry I’ll call physical symmetry. This is perhaps a murkier concept, but loosely speaking, we will refer to theories and equations in physics as possessing this kind of symmetry. It turns out that systems which possess physical symmetry also have conserved quantities (e.g. energy, momentum, etc.), an idea we’ll revisit in the next post.

2.1. Wallpapers: an amuse-bouche

In an attempt to capture the more artistically bent among us early, I’ll use wallpapers as a motivating example of geometric symmetry. We all have some visceral idea in the backs of our minds as to why exactly we should consider wallpapers as being symmetric, but let’s think about how we could communicate this precisely.

Imagine you and I had a roll of wallpaper, which repeats in the vertical direction every 1 foot:

[Image: a repeating wallpaper pattern]

We cover your walls with it (let’s pretend for a moment that you dropped a lot of money and invested in a wall which is infinite in extent), finish the job, and are quite pleased with ourselves. But little did you know — I’m an evil genius and have decided to play a prank on you! While you’re off to the bathroom, I slide the entire wallpaper up exactly 1 foot and then restick it onto the wall.

Alright, this is not much of a prank… actually, when you return, your world is completely unchanged; because I slid the wallpaper up by exactly the distance in which it repeats, you can’t even tell that I’ve done anything at all! (If you don’t quite see it, scroll down a bit and press the up arrow on the applet. Does the picture look different after the animation has finished?)

Because this is the case, we say that the wallpaper is symmetric under sliding, or more formally that it is symmetric with respect to translations. Because it repeats, I’m able to slide it along one direction when you’re not looking and there’s nothing you can do to be able to tell that I’ve changed it at all. This is the definition we’ll use for symmetry.

Definition: An object is said to be symmetric under a particular transformation (e.g. sliding) if the only way another person can tell that I have performed this transformation on the object is by catching me in the act.

Of course, translations aren’t the only type of symmetry this wallpaper has. I can also rotate it in a couple of ways, and reflect it along different lines. You can visualize a few of these below.

Note that the grid lines I placed are just to make it easier to see that the wallpaper really hasn’t changed at all after each of the transformations is performed on it; to see this, just look at the inside of one of the parallelograms, before and after.

Also, as an exercise, you might try to discover which symmetries I was too lazy to implement. They’re certainly not all there!

Now, let’s say that you had watched me slide the wallpaper behind your back after all. There is no need for worry; you can always ‘undo’ my prank. In this case, you could just slide the wallpaper back down 1 foot, but if I had rotated it 60{^{\circ}} clockwise, then you could also just rotate it 60{^{\circ}} counterclockwise. This is a particularly nice feature of symmetry transformations: any symmetry can be undone by another symmetry. More formally, every symmetry has an inverse transformation, which is also a symmetry.

Definition: The inverse of a symmetry transformation is just the symmetry transformation which undoes it by bringing the object back to its initial configuration. Every symmetry has an inverse!

Another nice feature of symmetries is that you can always combine them. If I give you two different symmetry transformations, you can always obtain a third by performing one and then the other. For example, one can combine a 60{^{\circ}} rotation with itself to obtain a 120{^{\circ}} rotation. This is usually referred to as composition of symmetries, but I will sometimes also use the word multiplication synonymously for reasons that will become clear in a moment.

Finally, a trivial but important observation is that just leaving the wallpaper alone is a symmetry transformation as well. We’ll call this the identity symmetry.

An aside

OK, maybe I was being a bit coy in suggesting the exercise above. There are actually infinitely many symmetries of this wallpaper. For example there are infinitely many points about which I can rotate the wallpaper and leave it unchanged. But hope is not lost; it turns out that every symmetry of the wallpaper can be obtained by taking combinations of some finite collection of symmetries. So the modified exercise might be: what finite collection of symmetries generates all the rest?

Summary: We’ve defined a symmetry to be a transformation of an object that brings the object back to itself, so that the only way someone could tell that a transformation had been performed at all is if they observed it happening. In the case of a wallpaper, we are free to slide, rotate, and reflect it along different axes, and because the pattern repeats, the wallpaper afterwards will be in the exact same configuration as before we transformed it.

2.2. Groups: the algebra of symmetries

This previous discussion suggests a more abstract, algebraic definition of symmetries. Let’s capture the main properties of symmetry that we explored above in a mathematical definition.

Definition: A group {(G,\star)} is a collection of objects (called elements), {G}, along with a way of multiplying them, {\star}, satisfying the following properties:

1) Closure: For any two objects {g} and {h} in the group, their product {g\star h} is also in the group.

2) Identity: There is an object in the group, {e}, which acts as the number {1} does in the sense that multiplying it with any other object, {g}, gives that same object back:

\displaystyle e\star g = g\star e = g

3) Invertibility: Every object {g} can be undone, or inverted, in the sense that there is always some other object, {g^{-1}}, in the group which cancels it:

\displaystyle g\star g^{-1} = g^{-1}\star g = e

4) Associativity: When multiplying three objects {g,h,} and {k} together, it does not matter whether you multiply the product of the first and second with the third, or multiply the first with the product of the second and third:

\displaystyle g\star (h\star k) = (g\star h) \star k

The idea here is that each element of a group corresponds to some kind of symmetry transformation. In this way, we are able to discuss the algebra of symmetries without lugging around a bunch of ideas having to do with geometry. Algebra is easy!

This is what math is. We found some kind of interesting phenomenon in the real world (symmetry) and we captured its essence in an abstract definition (groups). Now any statement we make about groups is really a statement about symmetry, and in this way, we’ve provided a framework that affords us the ability to rigorously investigate the properties of symmetry. So let’s explore this rich structure with some examples of groups.

Here’s an especially simple group to get us started: it has only the elements {1} and {-1} in it and the group operation is just ordinary multiplication. We can summarize this in a group table.

What kind of symmetry does this correspond to? Well, human symmetry for one! We can imagine {-1} as abstractly representing a reflection which swaps our right and left halves, and {1} as not doing anything. As a sanity check, we know that if I perform two reflections on you, one after another, you will return to your normal unreflected self. In mathematical terms, this is the statement you learned in 5th grade,

\displaystyle (-1)\cdot (-1) = 1.

Said in a group theoretic spirit, reflecting twice is the same thing as doing nothing to you! This group, called {\mathbb{Z}_2}, shows up all over the place in physics — for example some theories are symmetric under the reversal of time (flip time twice and you end up with time flowing in the direction it started).

There is maybe another group with two elements that is familiar. This time the group has elements {\overline{0}} and {\overline{1}} (I’m putting bars over these elements just to emphasize the fact that {\overline{1}} is different from the {1} from the previous group), and the multiplication is given by addition modulo 2.

Does this group give us anything new? On the surface, we would undoubtedly think that this one is different from the previous one we wrote down, but there is a sense in which they’re actually exactly the same. The key to understanding this is that {1,-1} and {\overline{0},\overline{1}} are just symbols — in fact, if we take this new group table defined with addition modulo 2, erase the {+} and write a {\cdot} in its place, and replace all the instances of {\overline{0}} and {\overline{1}} with {1} and {-1} respectively, then we recover the exact same group table as before! In other words, the way {\overline{0}} and {\overline{1}} add together is the same way {1} and {-1} multiply.
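If you’d like to see this relabeling argument done mechanically, here is a minimal Python sketch (the dictionary names and the `relabel` map are just illustrative choices of mine, not notation from the text):

```python
# Group table for ({1, -1}, ordinary multiplication)
mult_table = {(a, b): a * b for a in (1, -1) for b in (1, -1)}

# Group table for ({0, 1}, addition modulo 2)
mod2_table = {(a, b): (a + b) % 2 for a in (0, 1) for b in (0, 1)}

# Proposed relabeling: 0 <-> 1 and 1 <-> -1
relabel = {0: 1, 1: -1}

# Isomorphism check: relabeling every entry of the mod-2 table
# must reproduce the multiplication table exactly.
assert all(
    mult_table[(relabel[a], relabel[b])] == relabel[mod2_table[(a, b)]]
    for a in (0, 1)
    for b in (0, 1)
)
print("The tables agree after relabeling: the two groups are isomorphic.")
```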

Definition: Two groups {G_1} and {G_2} are said to be isomorphic if there is a one-to-one correspondence of symbols in {G_1} with symbols in {G_2} such that replacing the symbols which appear in {G_1} with their partners in {G_2} recovers precisely the multiplication table of {G_2}.

This is another common feature of mathematics. When we make definitions, there are often hidden redundancies, multiple objects that on the surface may look different, but are actually exactly the same up to relabeling. It turns out that there is only one group with two elements in it up to isomorphism, and we’ve written it above in two different ways. From now on, we’ll just take isomorphism to be our definition of equality — in other words, we’ll consider two groups to be different if and only if they’re not isomorphic to one another.

Summary: We captured the essence of symmetry in an algebraic definition, the group, and saw that groups are completely specified by their multiplication tables. We also defined the notion of isomorphism, which is a more useful notion of equality between groups because it disregards artificial differences between groups, like what symbols we choose to write them down with.

2.3. Classification: finding a periodic table of groups

It’s a common principle of math and science that once a definition is made, or a phenomenon observed, we ask ourselves, “What are all the different kinds of objects that fit into this definition?” For example, chemists might be concerned, indirectly at least, with what all the different types of molecules are. Of course this question seems very open-ended and difficult to answer as stated, but maybe there is a simpler question whose answer gets us most of the way there.

We learned in high school chemistry that molecules have atoms as their building blocks, so if we figure out what all the possible elements are, then we’ve made a significant amount of progress in our classification problem. And of course, chemists have given us such a classification — the periodic table of elements!

Mathematicians took a similar approach to answering the question, “What are all the possible finite groups we can write down?” The analog of ‘atoms’ here are called simple groups, which I may also refer to as atomic groups to make the analogy between group theory and chemistry more explicit. It turns out that even after restricting attention to these simpler groups, the answer to this question occupied the efforts of mathematicians for the greater part of the last century, and the classification is now spread out over thousands of pages in the literature. Here’s the statement of the classification of finite simple groups:

Theorem: Every finite simple group either belongs to one of 3 broad families (the cyclic groups of prime order, the alternating groups, and the groups of Lie type, each containing infinitely many finite simple groups), or is one of the 26 outlier groups, called the sporadic groups.

Of these 26 sporadic groups, one stands out. The Monster group ({\mathbb{M}}) is the largest finite simple sporadic group, weighing in with a whopping {8\times 10^{53}} elements! Imagine writing down the group table for that… In fact, it’s so big and monstrous that it gobbled up all but 6 of the other 25 outlier groups. That is to say, the structure of most of the other sporadic groups can be, in some sense, found within the Monster. One group in particular that can be found inside the belly of the beast and will be relevant for our story later is the Thompson group.
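To get a feel for just how big that is, here is a quick Python sanity check based on the Monster’s known prime factorization (a standard fact quoted here, not something we derive in this post):

```python
# Known prime factorization of the order of the Monster group:
# |M| = 2^46 * 3^20 * 5^9 * 7^6 * 11^2 * 13^3 * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}

order = 1
for prime, exponent in factors.items():
    order *= prime ** exponent

print(order)            # 808017424794512875886459904961710757005754368000000000
print(f"{order:.2e}")   # about 8.08e+53, i.e. roughly 8 x 10^53 elements
```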

In my head, the role of the Monster group is played by Monstro the Whale (from Pinocchio) and so it naturally follows that the role of the Thompson group be played by Geppetto, who unfortunately spent some time in Monstro’s gastrointestinal tract. Feel free to discard this silly mnemonic.


Summary: Just like chemists classified matter with the periodic table of elements, mathematicians have classified all the finite groups by considering simple groups. In this classification, there are 26 distinguished sporadic groups, 2 of which will be relevant for our story: the Monster group, which is the largest of the sporadic groups, and the Thompson group, which lives inside the Monster.

2.4. Representations: how groups act in different dimensions

Many of you have probably taken a linear algebra class in college or in high school. This is because, as a branch of mathematics, linear algebra is ubiquitous — it’s enjoyed tremendous application in nearly every discipline you can think of. Why? Because almost everything is approximately linear (for example, zoom in really close on any curve and it will basically look like a line) and linear algebra is an extremely well-developed and well-understood theory.

On the other hand, group theory is in general very difficult. To this end, mathematicians have found it incredibly fruitful to reduce problems of group theory to problems of linear algebra. Though this so-called representation theory is one of my favorite mathematical subjects and is an incredibly useful tool for understanding theoretical physics, we’ll try not to veer too far off course and will just present the minimum amount needed to understand moonshine.

So how exactly do we study groups in terms of linear algebra? Remember that the object that is most central to linear algebra is the matrix:

\displaystyle M = \left(\begin{array}{cccc} a_{1,1} & a_{1,2} & \cdots & a_{1,m} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,m} \end{array}\right)

and that two matrices can be multiplied together to produce a third matrix. The exact way this happens is not so important for our purposes, but if you’re curious you can just punch ‘matrix multiplication’ into Google and get millions of results.

One way to think about a representation of a group is as a box which assigns a matrix {M(g)} to every group element {g}:

One shoves group elements into one side of this box and the box spits back a matrix which is meant to represent that group element. But this isn’t just any old box… the way this box represents elements preserves the basic structure of the group. The high level idea is that the matrices satisfy the same multiplication rules as the group elements themselves, and so we’ve reincarnated certain features of the group in a linear algebraic setting.

In order for this to happen, it shouldn’t matter whether we first combine the group elements and then look at the resulting matrix, or first shove group elements through the box and then combine their corresponding matrices; the result should be the same in either case! Pictorially,

To give a name to this, we say that representations satisfy the homomorphism property. If all this seems hopelessly complicated, I don’t blame you. The important thing to take away though is just that a representation is an assignment of a matrix to each element of the group in a way that captures features of the group that we’re interested in.

Mathematical definition of a representation
A representation of a group {G} is a homomorphism {\rho:G\rightarrow \mathrm{GL}(V)} from the group to the automorphism group of some vector space.


An example of a representation
Since {{\mathbb Z}_2 = \{1,-1\}} is our favorite group, we will give an example of one of its representations to make the above a bit easier to digest. All we need to do to define a representation of {{\mathbb Z}_2} is define two matrices associated with {1} and {-1}, {M(1)} and {M(-1)}, and make sure they have the homomorphism property. I claim that the assignment

{M(1) = \left(\begin{array}{rr} 1 & 0 \\   0 & 1 \end{array}\right),  \ \ \   M(-1) = \left(\begin{array}{rr} 0 & 1 \\   1 & 0 \end{array}\right)}

constitutes an honest-to-God representation!

We described above how {(-1)\cdot (-1) = 1}. So for example, the homomorphism property demands that {M(-1)\times M(-1) = M(1)}. If you know how to multiply matrices, then you can easily verify that this is true:

\displaystyle M(-1)\times M(-1) = \left(\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right)\times\left(\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right) = \left(\begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}\right)

\displaystyle M(1) =\left(\begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}\right)

Noticing that the right hand sides of both lines are the same is all it takes to prove that {M(-1)\times M(-1) = M(1)}. In principle, one would need to go through all combinations of group elements and verify that they satisfy the homomorphism property, but since {(-1)\cdot (-1) = 1} is the only interesting case we’ll just leave it at that.
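If you’d rather let a computer do the matrix arithmetic, here is a minimal sketch using numpy; the variable names `M_plus` and `M_minus` are just mine:

```python
import numpy as np

# The two matrices representing the elements 1 and -1 of Z_2
M_plus = np.array([[1, 0],
                   [0, 1]])   # M(1), the identity matrix
M_minus = np.array([[0, 1],
                    [1, 0]])  # M(-1), which swaps the two coordinates

# Homomorphism property: (-1) * (-1) = 1, so M(-1) M(-1) should equal M(1)
assert np.array_equal(M_minus @ M_minus, M_plus)
print("M(-1) x M(-1) = M(1): the homomorphism property checks out.")
```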

If you’ve made it this far, pat yourself on the back. We have a little farther to go, but it won’t be long before we’ve reached the moonshine.

So once again, notice that we’ve defined another abstract structure: the representation. As with groups, we’re led to ask a natural question: “What are all the representations I can write down of a particular group?” In the example above, we wrote down one representation of {{\mathbb Z}_2}. What do all the other representations of {{\mathbb Z}_2} look like?

Remember that in asking the analogous question for molecules, chemists studied the atom. Here, we have the exact same thing. There is a notion of an irreducible representation; the irreducible representations of a group constitute those representations out of which all the others can be constructed, just as atoms constitute the building blocks out of which all other molecules can be constructed.

Now, the chemists invented the periodic table of elements to further assist in their study of matter. Can we come up with an analogous table for representations of a particular group? The answer is yes, and it will be extremely useful for us when we encounter moonshine later on.

It turns out that there is an incredible amount of information encoded in the trace of a matrix, which is defined as the sum of the elements along the diagonal. So for example,

\text{trace of } \left(\begin{array}{rrr} 2 & 3 & 4 \\ 5 & 6 & 0 \\ 1 & 8 & 8 \end{array}\right) = 2+6+8=16

There is enough information baked into the trace that we will use it to characterize representations entirely.

Here’s what we’ll do: let’s say we’re given some group and we want to create its representation table. Each row will correspond to an irreducible representation {M}, and each column will correspond to a group element {g}. Each entry in the table will be the trace of the matrix {M(g)} assigned to that group element under that representation. In terms of the box analogy, the columns correspond to group elements, and the rows to different boxes. Each entry in the table is then the trace of the matrix that you get out when you shove that group element into that box. Simple enough, no?
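As a toy illustration of this recipe, here is a sketch of the representation table of {{\mathbb Z}_2} built in Python. The two irreducible representations used (the ‘trivial’ one, which sends everything to the 1×1 matrix (1), and the ‘sign’ one, which sends {-1} to (-1)) are standard facts about {{\mathbb Z}_2}; the variable names are mine:

```python
import numpy as np

# The two irreducible representations of Z_2 = {1, -1}, as 1x1 matrices:
# the 'trivial' representation sends both elements to (1),
# the 'sign' representation sends -1 to (-1).
irreps = {
    "trivial": {1: np.array([[1]]), -1: np.array([[1]])},
    "sign":    {1: np.array([[1]]), -1: np.array([[-1]])},
}

# Each entry of the representation table is the trace of the matrix
# assigned to a group element by an irreducible representation.
print("rep       g=1   g=-1")
for name, rep in irreps.items():
    traces = [int(np.trace(rep[g])) for g in (1, -1)]
    print(f"{name:9} {traces[0]:3} {traces[1]:6}")
```

Notice that the column under the identity element records the dimensions of the representations (both equal to 1 here), exactly as described below.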

For example, here’s the first bit of the representation table of the Monster group.
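For reference, here is roughly what the upper-left corner of that table looks like, with columns labeled by the group elements 1A and 2A (these entries are exactly the numbers that will reappear in the moonshine identities of Section 4):

\displaystyle \begin{array}{r|rr} & 1A & 2A \\ \hline M_1 & 1 & 1 \\ M_2 & 196883 & 4371 \\ M_3 & 21296876 & 91884 \\ M_4 & 842609326 & 1139374 \\ & \vdots & \vdots \end{array}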

The first irreducible representation (or first box) I denote by {M_1}, the second by {M_2}, and so on. As a final piece of terminology, we’ll define the dimension of a representation to be the size of the matrices that it maps group elements to. The dimension of the representation is equivalently the trace of the matrix assigned to the identity element, 1A, of the group. So the dimension of the second irreducible representation is 196883. Remember this number!

Summary: To learn more about groups, we decided to cast their study into the framework of linear algebra. We defined a representation of a group simply as an association of a matrix to every element of the group in a special way that preserves the group structure. In asking what all the representations of a group are, we learned about its irreducible representations, which are the representations out of which all others can be built. We summarized this information in a representation table. Every entry of this table is the trace of some matrix — if we are in the column corresponding to group element {g} and the row corresponding to representation {M}, then the matrix is the one that {g} maps to under the representation {M}. We will see these important numbers come up in the study of moonshine.

3. Numbers

If I were asked what I think the most difficult branch of mathematics is, I would, without hesitation, say that it’s number theory. I always found it strange that the most elementary concepts (adding, counting, etc.) can often be the hardest to study. For example, one of the most notoriously challenging problems mathematicians faced for hundreds of years was the proof of Fermat’s Last Theorem. Yet the statement of the problem is so simple that one could explain it to a clever middle school student!

The main actors from number theory that will be useful for understanding moonshine are the modular forms. However, they’re abstract, absurdly so, and in order to get you motivated to study them, I want to convince you of their importance by discussing one or two of their innumerable applications. In the process, we’ll learn some cute bits of history, and you’ll have a story or two to tell at your next dinner party!

3.1. Fermat: the math world’s biggest troll?

To get us started, let’s review a bit of middle school math.

[Image: a right triangle with legs a and b and hypotenuse c]

Recall that the lengths of the sides of a right triangle are related by a simple formula,

\displaystyle a^2 + b^2 = c^2,

where of course, {c} is the hypotenuse (the diagonal). One can ask, “Are there integer solutions to this equation?” The answer is of course yes! For example,

\displaystyle 3^2 + 4^2 = 5^2.

Another famous one is obtained by choosing {(a,b,c) = (5,12,13)}. These are called Pythagorean triples, and no doubt you’ve had to memorize one or two at some point in your life. Note that when we say integer solutions, we are excluding cases where {a, b, c} are fractions or have decimals. As suggested by the name, Pythagorean triples have been studied since the Greeks, and even the Babylonians before them, and have enjoyed immense importance ever since.
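If you want to watch these integer solutions pop out of a computer, here is a tiny brute-force sketch in Python (the search bound of 20 is an arbitrary choice of mine):

```python
# Brute-force search for Pythagorean triples with legs up to 20
triples = [(a, b, c)
           for a in range(1, 21)
           for b in range(a, 21)
           for c in range(b, 29)
           if a * a + b * b == c * c]
print(triples)  # includes (3, 4, 5), (5, 12, 13), (8, 15, 17), ...
```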

Now, what if we asked a possibly harder question? To state the problem, let {n} be some number that is greater than 2 (it could be 3, 4, or 5,000,000 if you want). The problem is to find integer solutions {a,b,c} to the equation

\displaystyle a^n + b^n = c^n.

Fermat boldly decided this issue by claiming that you cannot! There simply do not exist any integer numbers {a,b,c} which satisfy the equation when {n} is greater than 2. In 1637, Fermat wrote this ‘theorem’ in the margin of his copy of the Arithmetica, saying,

It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.

I put ‘theorem’ in quotations because in order for something to be considered a theorem, it needs to be proven! Fermat did no such thing. In fact, the conjecture eluded mathematicians until 1994 when Andrew Wiles, using totally modern methods and devoting some seven years of his life in total secrecy to solving the problem, finally managed to churn out a proof. This makes it a bit difficult to believe that Fermat actually produced a proof in two minutes sitting in an armchair. I think most historians chalk this ‘marvelous proof’ up to a simple brain fart.

Nonetheless, it will be insightful for our purposes to analyze Andrew Wiles’s successful proof of Fermat’s Last Conjecture. The main engine which drove the proof forward was what has now become known as the modularity theorem. In layman’s terms, the modularity theorem established a correspondence between two branches of mathematics which were previously thought to be unrelated to each other. The first branch is the study of elliptic curves, like the one below.

[Image: the graph of an elliptic curve]

In general, we can just think of an elliptic curve as the graph of an equation of the form

\displaystyle y^2 = x^3+ax+b

which also satisfies some other properties which are irrelevant for our purposes.

The other branch of mathematics is the study of modular forms, one of the main number theoretic actors in our drama (we’ll save the exact definition of what these are for the next section). Then this modularity theorem can be stated as follows:

Theorem: To each elliptic curve one can associate a unique modular object.

Sounds simple enough, but in fact the proof of this theorem was thought to be essentially impossible, or at least beyond the reach of mathematics (this was around the 1980s).

But what is the relevance of the modularity theorem to Fermat’s Last Conjecture? The answer is that mathematicians already knew a ‘proof by contradiction’ of Fermat’s Last Conjecture, but it relied on the modularity theorem being true. In other words, if one proved the modularity theorem, one would obtain Fermat’s Last Theorem as a corollary.

In a proof by contradiction, one assumes the opposite of what they want to prove, and demonstrates that it leads to an absurdity or a contradiction. For example, Sherlock Holmes may argue, “There is video footage of the suspect in his own home 20 minutes before the crime was committed. Assuming he murdered John Doe as you claim, investigator, then he would need to have driven his car from his home to the crime scene at an average of 250 miles per hour! This is an absurdity because even the fastest car in the world is not capable of traveling at these speeds, especially in New York traffic. Therefore, I claim, he is innocent!”

Applying a similar argument structure to Fermat’s Last Conjecture, if one assumes that there are integer solutions to the equation {a^n + b^n = c^n} for some {n} greater than 2 (this of course being the opposite of Fermat’s Last Conjecture) then it can be shown that there exists an elliptic curve to which one cannot associate a unique modular object, which is absurd if you believe that the modularity theorem is true. In other words, if Fermat’s Last Conjecture is not true, then neither is the modularity theorem!

Thus, if one could prove the modularity theorem, then one would obtain also a proof of Fermat’s Last Conjecture. This is precisely what Andrew Wiles accomplished in 1994.

Summary: Modular forms are important!

3.2. Modular forms: the fifth elementary operation of arithmetic

Now, we have seen that modular forms enter into some pretty important mathematics (at least important to mathematicians). In fact, modular forms show up all over the place throughout the study of numbers. They’re so ubiquitous that the famous mathematician Eichler is quoted as saying,

There are five elementary arithmetical operations: addition, subtraction, multiplication, division, and… modular forms.

So, if you will allow me, I can finally get around to explaining what they actually are.

Simply stated, a modular form is a function which enjoys a high degree of internal symmetry. Different modular functions will enjoy different amounts of symmetry, so it is convenient to package this information as a group that we carry along whenever we talk about a modular function. Let’s make a (very simplified) definition.

Loose definition: We say that a function is a weightless modular function with symmetry group {\Gamma} if it is symmetric with respect to all the symmetry transformations inside the group {\Gamma}.

If you need a refresher on group theory, go back and read the previous section. Whereas before we were talking about geometric objects as having associated symmetry groups (humans had {\mathbb{Z}_2}-symmetry), now we are talking about mathematical objects, like functions, as having symmetry groups. There are still ways to visualize this kind of symmetry, but for the sake of getting to the end of the story, we won’t pursue them here. However, I will say that it leads to pretty pictures like this one:

[Image: a visualization of the symmetry of a modular function]

For those who are interested, we give a (slightly) more rigorous definition below.

Definition of a weightless modular function
Define the upper half plane {\mathfrak{h}} to be the set of complex numbers with positive imaginary part. A function {f:\mathfrak{h}\rightarrow {\mathbb C}} is called a weightless modular function with symmetry group {\Gamma} if it is meromorphic in the upper half plane, obeys some growth conditions, and, for every matrix {\left(\begin{array}{rr}a & b \\ c & d\end{array}\right)} in {\Gamma}, obeys the transformation equation

\displaystyle f\left(\frac{a\tau+b}{c\tau + d}\right) = f(\tau).
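To make this a bit more concrete: it is a standard fact (quoted here without proof) that the group {\mathrm{SL}_2({\mathbb Z})} we will meet below contains the matrices {\left(\begin{array}{rr}1 & 1 \\ 0 & 1\end{array}\right)} and {\left(\begin{array}{rr}0 & -1 \\ 1 & 0\end{array}\right)}, so a weightless modular function with this symmetry group satisfies, in particular,

\displaystyle f(\tau+1) = f(\tau) \qquad \text{and} \qquad f\left(-\frac{1}{\tau}\right) = f(\tau).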

Now, let’s denote the collection of all weightless modular functions with symmetry group {\Gamma} as {\mathrm{Mod}(\Gamma)}. In other words, {\mathrm{Mod}(\Gamma)} is just a set — the elements of this set are functions which are symmetric at least under all the symmetry transformations in the group {\Gamma}.

As it turns out, if the group {\Gamma} is specially chosen, we can find one distinguished modular function inside of {\mathrm{Mod}(\Gamma)} from which all other modular functions in {\mathrm{Mod}(\Gamma)} can be generated. We will call such a special group a genus zero group and such a distinguished modular generating function a hauptmodul.

Rigorous definition of a hauptmodul for a genus zero group
A function {j\in \mathrm{Mod}(\Gamma)} is called a hauptmodul for {\Gamma} if any function {f\in \mathrm{Mod}(\Gamma)} can be written as a rational function in {j}, i.e.

\displaystyle f(\tau) = \frac{p(j(\tau))}{q(j(\tau))}

where {p(X)} and {q(X)} are both polynomials.

We’re almost to the juicy part. Many of you who studied calculus in high school or college may remember what a Taylor series is (or the closely related Fourier series). If not, don’t worry… the basic idea is easy enough to picture. The intuition behind Fourier series is that any wave or signal can be decomposed into its ‘tones’. For example, if I recorded an orchestra playing a chord and performed Fourier analysis on the resulting sound wave, it would tell me how loud the {D} is, how loud the {F\#} is, etc. If I wanted to reconstruct the original wave, I could just take a single {D} wave and an {F\#} wave and combine them in the right proportions, dictated by how loud each part is. Pictorially, Fourier analysis would decompose the red signal into the blue ones:

[Image: a signal (red) decomposed into single-frequency waves (blue)]

If I wanted to obtain the red one from the blue ones, I would just add them together. Mathematically, it might look something like this:

\displaystyle f(\tau) = a_0 + a_1q + a_2q^2 + a_3 q^3 + \cdots

Here, each term {a_nq^n} is a ‘single frequency wave’ (like the wave corresponding to the note {F\#}) and {f(\tau)} is the original signal, which we are writing down as a combination of a bunch of single frequency waves. (For modular functions, the basic ‘waves’ are powers of {q = e^{2\pi i \tau}}.) Indeed, we can play this game for modular functions just as well as we can play it for sound waves, and we will do just this in the next section.

Summary: Modular forms are highly symmetric functions which appear all over the place. When considering the collection of all modular functions which are symmetric with respect to a particular symmetry group (we denoted this collection by {\mathrm{Mod}(\Gamma)}) we found that, if {\Gamma} is a genus zero symmetry group, every modular function in {\mathrm{Mod}(\Gamma)} could be written in terms of a special generating function, called a hauptmodul. We will see an example of a hauptmodul in the next section.

4. Moonshine

Alright, the time is finally here. We have most of the tools we need to finally understand moonshine. I was going to end on a cliffhanger and finish this post off next week, but I won’t do that to you… I’ll at least give you a little taste of the moonshine (mathematically), but we’ll have to postpone a discussion of what this has to do with physics until next week.

We ended the last section talking about hauptmoduls for genus zero groups. One very, very important modular function is the hauptmodul for a group called {\mathrm{SL}_2(\mathbb{Z})} — we’ll denote this function with {J(\tau)}. Its graph is very beautiful:


Let’s say we’re interested in learning more about this function. We can write out its ‘tonal’ expansion:

\displaystyle J(\tau) = q^{-1} + 196884q+ 21493760q^2 + 864299970q^3 + 20245856256q^4 + \cdots

There are an infinite number of tones in the expansion, but in the interest of saving cyber forests I’ve decided not to list all of them. The most attentive amongst you might recognize these numbers… if we take a look at the representation table we wrote out for the Monster group,

you’ll notice that we have some nice numerical identities.

\displaystyle  \begin{array}{rcl}  1&=& 1\\ 196884 &=& 196883 + 1 \\ 21493760 &=& 21296876 + 196883 + 1 \\ 864299970 &=& 842609326 + 21296876 + 2 \cdot 196883 + 2\cdot 1 \\ &\vdots& \end{array}

where the numbers on the left appear in the expansion of the {J}-function, and the numbers on the right appear in the table (in fact, they are the dimensions of the irreducible representations of the Monster group). Just the observation that 196884 = 196883 + 1 was bizarre enough that John Conway called the correspondence “moonshine”, which is apparently British slang meaning ‘crazy’ or ‘absurd’.

But the connection goes further. There is another genus zero group called {\Gamma_0(2)+} whose hauptmodul is

\displaystyle J_{2+}(\tau) = q^{-1} + 4372q + 96256q^2 + 1240002q^3 + \cdots

Can you guess how these numbers are related to the Monster group? Notice that if you take the numbers 1 and 196883 and look at the corresponding numbers one column to the right in the Monster representation table, you get 1 and 4371 respectively. In fact this works in general: if you just scoot one column over to the right, you’ll get an analogous list of identities:

\displaystyle  \begin{array}{rcl}  1&=& 1\\ 4372 &=& 4371 + 1 \\ 96256 &=& 91884 + 4371 + 1 \\ 1240002 &=& 1139374 + 91884 + 2 \cdot 4371 + 2\cdot 1 \\ &\vdots& \end{array}
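These decompositions are easy to double-check, by hand or with a few lines of Python. Here is a sketch that uses the entries of the representation table quoted above; the multiplicity tuples are simply read off the identities, and the variable names are mine:

```python
# First few entries of the Monster's representation table:
# the 1A column (the dimensions) and the 2A column.
col_1A = [1, 196883, 21296876, 842609326]
col_2A = [1, 4371, 91884, 1139374]

# Multiplicities of (M_1, M_2, M_3, M_4) appearing in the decompositions
# above, paired with the J and J_{2+} coefficients they should reproduce.
multiplicities = [
    ((1, 1, 0, 0), 196884,    4372),
    ((1, 1, 1, 0), 21493760,  96256),
    ((2, 2, 1, 1), 864299970, 1240002),
]

for mults, coeff_J, coeff_J2 in multiplicities:
    combo_1A = sum(m * d for m, d in zip(mults, col_1A))
    combo_2A = sum(m * d for m, d in zip(mults, col_2A))
    assert combo_1A == coeff_J and combo_2A == coeff_J2
    print(f"{mults} -> {combo_1A} (J column), {combo_2A} (J_2+ column)")
```

The same multiplicities work for both columns, which is exactly the pattern the moonshine conjectures below are describing.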

We did not develop enough tools to present the full Monstrous moonshine conjectures, but here is what we have so far:

Conjecture: To every group element {g} of the Monster group, we can associate some genus zero group {\Gamma_g} whose hauptmodul we will denote {J_g}. This association is special because the tones of the hauptmodul,

\displaystyle J_g = q^{-1} + a_1q + a_2q^2 + a_3q^3 + \cdots

are related to the numbers which appear in the column of the Monster representation table which is associated with the element {g}.

For example, if we choose {g = 1A}, the identity element, then the genus zero group is

\displaystyle \Gamma_{1A} = \mathrm{SL}_2({\mathbb Z})

as we discussed above, its hauptmodul is just the ordinary {J}-function,

\displaystyle J_{1A}(\tau) = J(\tau) = q^{-1} + 196884q + \cdots

and its tones are related to the numbers which appear in the first column of the Monster representation table, which is the column corresponding to the 1A group element.

It’s important to note that the exact Monstrous moonshine conjectures are not only more precisely stated, but they say much more about what is going on than the above does. The entire discussion so far was just a way of giving a taste of the mathematical objects which are involved. In a week or so, we’ll see that the bridge which connects the {J} function to the Monster group is string theory.

25 Comments for “The story of moonshine, part I: symmetry, number theory, and the monster”

Cleidin

says:

Brandon: It took me quite some time to just finish reading. I need to review a couple more times to hopefully get the gist of the analysis. But, I think I now know that the most mysterious moonshine exists in a 24 dimensional level, or did I get it wrong?

Vlad Sergiescu

says:

Beautiful! Thank you!

Question: when for the sequel?

Regards, Vlad

brandonrayhaun

says:

Thank you so much!

I think the sequel will be up in a few months. These posts take a surprising amount of time to finish!

Jamie

says:

Great post! You have a real talent for making mathematics easily digestible. Any update on when part 2 will be finished? All the best.

brandonrayhaun

says:

I really appreciate it!

With respect to part 2, I’ve drafted an outline and written up a few sections. Unfortunately, I also have a lot of other work looming over my head before I graduate in June. I hope to have it posted no later than late July though! 🙂

Jonathan

says:

Another fan here. I am especially impressed by the inclusion of all the visible examples. I hope that you can find the time to finish the second part!

As a lay person, I find it intriguing that “pure mathematics” often has corollaries to both practical and theoretic physics. In a similar vein, Max Tegmark has proposed a Mathematical Universe Hypothesis which implies that mathematical structures may be the foundation of our reality. I wonder if Monstrous Moonshine or some similarly complex mathematical structure could successfully describe our universe or encapsulate a Theory of Everything. Do you think Monstrous Moonshine has any bearing on current theoretic physics?

brandonrayhaun

says:

Thanks very much for your kind words! Your feedback is deeply appreciated.

It’s a great question, and it’s one that I hope to explore in the next post. There are indeed a few connections between moonshine and physics. The main insight was that the mathematical structure of monstrous moonshine can be understood as a ‘conformal field theory’ describing physics on some 24 dimensional torus (or rather, a slight variation of a 24 dimensional torus). Witten has also proposed that this conformal field theory may have implications for quantum gravity. The link to quantum gravity occurs through something called the AdS/CFT correspondence, which is a duality that relates certain conformal field theories to theories of gravity defined on anti de Sitter space (AdS space is something like this: http://i.stack.imgur.com/KNO3R.png). I will explain some of this in simpler terms in the next blog post!

It’s unclear whether or not moonshine has proved its worth to the physics community yet. There’s been a resurgence of interest in moonshine in the past few years though, and it’s possible (maybe likely) that future work will yield more connections to physics. For instance, more examples of moonshine have been discovered — like Umbral Moonshine, Conway Moonshine, and Thompson Moonshine — and they involve similar elements, but it’s unclear how the physics enters exactly, if at all.

Here’s one of the reasons I find it interesting. The particular kind of conformal field theory that mathematicians used to explain monstrous moonshine is called an ‘orbifold’ conformal field theory. This was something that did not quite exist yet in the physics world at the time. But it is now a very widely used construction in string theory. In some sense, monstrous moonshine might have provided physicists with the impetus to incorporate orbifolds into their toolbelts. Now, we have several new examples of moonshine that need explanation. If the techniques physicists are currently using turn out to be inadequate for describing these new moonshine examples, what kind of new theories will we be led to invent?

Jonathan

says:

Fascinating. Again, your first part here is the BEST explanation so far for Moonshine that I have been able to find. Based on your detailed response (thank you), I expect your second part to continue in that tradition. In another lifetime (or in an alternate universe), I might have followed a different career path (not medicine) and been able to understand the underlying mathematics. Until then, I’ll have to settle for understanding your blog and maybe one day your future book on the topic 🙂

brandonrayhaun

says:

Yah, it’s unfortunate that understanding the relevant math and physics takes so much time. But it’s never too late! In the worst case, if this stuff interests you, learning it could make for a fun retirement. 🙂

If you want a relatively painless way to get started, I highly recommend you check out Leonard Susskind’s video lectures, which were what got me interested in physics in the first place.

http://theoreticalminimum.com/courses

If you watched a few videos a week, I imagine you’d get through them in less than a year. And in my opinion, you’ll come out having a deeper understanding of physics than most undergraduates, just sans the technical chops.

Aldo

says:

This is so interesting! Where is part 2?

brandonrayhaun

says:

Thanks very much! I’ve been embarrassingly slow in getting around to part 2. 😛 It will be on this site though, and I’ll also probably post it to the physics and math subreddits as well when it’s ready.

Pardha

says:

Hi Brandon, really enjoyed reading the article. Extremely well written. I was looking for part 2 of this blog but am not able to find it. Can you kindly guide me to the 2nd part please.

Best regards

brandonrayhaun

says:

I appreciate it a lot! As you can see, I have been badly procrastinating on part 2. It’s nice to see that there’s still interest in it though; maybe that will be the motivation I need to finish it! It will be posted here when it’s ready. I’m sure there’s a way to subscribe to my blog so that you’re notified once that happens.

Jon

says:

I really also enjoyed the discussion on the monster. Have a masters in math but focused on topology. It’s been a long time since I have studied it as I now practice architecture but I still love reading about it. This was the best thing that I have found so far on this topic that didn’t require me to relearn multiple semesters of graduate level algebra. Thanks! Hope you finish part two!

says:

Hi Brandon, Very nice expository article. Looking forward to Part 2. You introduced the terms modular form and modular function. Then you used the expression modular object. What’s a modular object?

brandonrayhaun

says:

Hi, thanks very much! I should have just used the word modular form actually; it is safe to replace ‘modular object’ with ‘modular form’ everywhere you see it in your head. 🙂

JW

says:

Please complete part 2 Brandon. You clearly have a gift for understanding where what is obvious to the expert can be a stumbling block for interested amateurs. The animations for the wallpaper were a wonderful idea too. One of my favourite books is The Symmetries of Things by Conway et al. and I’m sure that if it had taken your approach I would have taken away much much more from it. Please complete part 2 Brandon. Then write a book.
