Schubert Calculus mini-course

I’ve written a lot about Schubert calculus here over the last few years, in posts such as Schubert Calculus, What do Schubert Curves, Young tableaux, and K-theory have in common? (Part II) and (Part III), and Shifted partitions and the Orthogonal Grassmannian.

I soon found out that writing a lot about your favorite topics on a blog can sometimes get you invited to give talks on said topics. This past June, I gave one of the three graduate mini-courses at the Equivariant Combinatorics workshop at the Centre de Recherches Mathématiques (CRM) in Montreal.

Naturally, a lot of what I covered came from the blog posts I already had written, but I also prepared more material on flag varieties and Schubert polynomials, more details on the cohomology of the Grassmannian, and more problems and examples suitable for the workshop. So I organized all of this material into a single lecture notes document available here:

Variations on a Theme of Schubert Calculus

The coolest part was that the CRM had a good-quality audio/video setup for all three mini-courses, and so you can also view my lecture videos that accompany these notes at the following five links, and take the entire course yourself:

Lecture 1: Introduction, Projective varieties, Schubert cells in the Grassmannian

Lecture 2: Duality theorem, CW complexes, Homology/cohomology

Lecture 3: Littlewood-Richardson rule, Flag variety, Schubert polynomials

Lecture 4: Cohomology of flag variety, generalized flag varieties

Lecture 5: The orthogonal Grassmannian, recap

You can also find the videos for the other two mini-courses from the summer – Steven Griffeth’s lectures on Cherednik algebras and Jeffrey Remmel’s lectures on symmetric function theory – at this link.

Hope someone out there finds these lectures useful or interesting. I certainly had a blast teaching the course!

Writing polynomials in terms of invariants and coinvariants

There is a fun little fact regarding polynomials in two variables $x$ and $y$:

Any two-variable polynomial $f(x,y)$ can be uniquely written as a sum of a symmetric polynomial and an antisymmetric polynomial.

(To be more precise, this is true for polynomials over any field of characteristic not equal to $2$. For simplicity, in what follows we will assume that our polynomials have coefficients in $\mathbb{C}$.)

Recall that a polynomial $g$ is symmetric if it does not change upon permuting its variables. In this case, with two variables, $g(x,y)=g(y,x)$. It is antisymmetric if swapping any two of the variables negates it, in this case $g(x,y)=-g(y,x)$.

It is not hard to prove the fact above. To show existence of the decomposition, set $g(x,y)=\frac{f(x,y)+f(y,x)}{2}$ and $h(x,y)=\frac{f(x,y)-f(y,x)}{2}$. Then $$f(x,y)=g(x,y)+h(x,y),$$ and $g$ is symmetric while $h$ is antisymmetric. For instance, if $f(x,y)=x^2$, then we can write $$x^2=\frac{x^2+y^2}{2}+\frac{x^2-y^2}{2}.$$

For uniqueness, suppose $f(x,y)=g_0(x,y)+h_0(x,y)$ where $g_0$ is symmetric and $h_0$ is antisymmetric. Then $g_0+h_0=g+h$, and so $$g_0-g=h-h_0.$$ The left hand side of this equation is symmetric and the right hand side is antisymmetric, so each side is simultaneously symmetric and antisymmetric. Such a polynomial equals its own negative and hence is identically zero (this is where we use that the characteristic is not $2$). This implies that $g_0=g$ and $h_0=h$, so the unique decomposition is $f=g+h$. QED.

This got me thinking…

Is there an analogous decomposition for polynomials in three variables? Or any number of variables?

The above decomposition doesn’t make sense in three variables, but perhaps every polynomial in $x$, $y$, and $z$ can be written uniquely as a sum of a symmetric, antisymmetric, and… some other particular type(s) of polynomials.

Indeed, it can be generalized in the following sense. Notice that any antisymmetric polynomial in two variables is divisible by $x-y$, since setting $x=y$ gives us $f(x,x)=-f(x,x)=0$. Moreover, dividing by $x-y$ gives a symmetric polynomial: if $$h(x,y)=p(x,y)\cdot(x-y)$$ is antisymmetric, then $p(x,y)\cdot (x-y)=-p(y,x)\cdot(y-x)$, and so $p(x,y)=p(y,x)$.

Thus any antisymmetric polynomial $h$ is equal to $x-y$ times a symmetric polynomial, and so we can restate our fact above in the following way:

Any two-variable polynomial $f(x,y)$ can be written uniquely as a linear combination of $1$ and $x-y$, using symmetric polynomials as the coefficients.

For instance, $f(x,y)=x^2$ can be written as $$x^2=\left(\frac{x^2+y^2}{2}\right)\cdot 1+\left(\frac{x+y}{2}\right)\cdot (x-y).$$
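If you want to play with this decomposition yourself, here is a quick sympy sketch (the helper `decompose` is my own naming, not standard):

```python
from sympy import symbols, expand, cancel

x, y = symbols('x y')

def decompose(f):
    """Split f(x, y) into its symmetric and antisymmetric parts."""
    swapped = f.subs({x: y, y: x}, simultaneous=True)
    return (f + swapped) / 2, (f - swapped) / 2

g, h = decompose(x**2)
print(expand(g))              # x**2/2 + y**2/2
print(expand(h))              # x**2/2 - y**2/2
print(cancel(h / (x - y)))    # x/2 + y/2, the symmetric coefficient of x - y
```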

Now, to generalize this to three variables, in place of $x-y$, we consider the polynomial $$\Delta=(x-y)(x-z)(y-z).$$ Also consider the five polynomials:

  • $x^2-z^2+2yz-2xy$,
  • $z^2-y^2+2xy-2xz$,
  • $x-y$,
  • $y-z$, and
  • $1$,

each of which is obtained by taking certain partial derivatives starting with $\Delta$. It turns out that every polynomial in three variables can be decomposed uniquely as a linear combination of these six polynomials, using symmetric polynomials as the coefficients!
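Here is a small sympy check of the derivative claim. The particular derivatives below are one consistent choice reproducing the bulleted polynomials (up to the scalars shown), namely $\Delta_y$, $\Delta_x$, $\tfrac12\Delta_{xy}$, and $\tfrac12\Delta_{yz}$; the post does not specify which derivatives were intended, so take this as an illustration:

```python
from sympy import symbols, diff, expand

x, y, z = symbols('x y z')
Delta = (x - y)*(x - z)*(y - z)

print(expand(diff(Delta, y) - (x**2 - z**2 + 2*y*z - 2*x*y)))   # 0
print(expand(diff(Delta, x) - (z**2 - y**2 + 2*x*y - 2*x*z)))   # 0
print(expand(diff(Delta, x, y) / 2))                            # x - y
print(expand(diff(Delta, y, z) / 2))                            # y - z
print(diff(Delta, x, x, y))                                     # 2, a scalar multiple of 1
```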

Where do these six polynomials come from? Turn to the next page to find out…

A proof of the existence of finite fields of every possible order

This is our first contributed gemstone! Submitted by user Anon1.

In the following, $p$ denotes a prime. We wish to prove that, for all positive integers $n$, there is a finite field of order $p^{n}$.

Step 1. Restating the problem.

Claim: It suffices to show that, for some power of $p$ (call it $q$), there exists a finite field of order $q^{n}$.

Proof. Suppose there is a field $F$ such that $|F| = q^{n}$. The claim is that the solution set to $x^{p^{n}} = x$ in $F$ is a subfield of order $p^{n}$.

Since $q$ is a power of $p$, say $q = p^{k}$, we have $q^{n}-1 = (p^{n})^{k}-1$, which is divisible by $p^{n}-1$ (since $a-1 \mid a^{k}-1$ for any integer $a$). Hence $$p^{n}-1 \mid q^{n}-1.$$ Since $q^{n}-1$ is the order of the cyclic group $F^{ \times }$, we know that $F^{ \times }$ has a unique cyclic subgroup of order $p^{n}-1$. This contributes $p^{n}-1$ solutions to $x^{p^{n}} = x$ in $F$. But $0$ is another solution, since it is not an element of $F^{ \times }$. This gives $p^{n}$ solutions, and since a polynomial of degree $p^{n}$ has at most $p^{n}$ roots in a field, this gives all solutions.

It remains to show that these solutions form a subfield of $F$. Since they form a finite set containing $0$ and $1$, it suffices to show that they are closed under addition and multiplication: additive inverses then come for free since $-x=(p-1)x$, and multiplicative inverses come for free since the nonzero solutions form a finite group under multiplication.

Closure under multiplication is immediate: $x^{p^{n}} = x$ and $y^{p^{n}} = y$ implies $$(xy)^{p^{n}} = x^{p^{n}}y^{p^{n}} = xy.$$

Closure under addition is similarly easy: Since $|F| = q^{n}$ is a power of $p$, the field $F$ has characteristic $p$. This means that, for all $x,y \in F$, $$(x+y)^{p} = x^{p} + y^{p}.$$ Applying this $n$ times yields $(x+y)^{p^{n}} = x^{p^{n}} + y^{p^{n}}$. If $x^{p^{n}} = x$ and $y^{p^{n}} = y$, then it follows that $(x+y)^{p^{n}} = x + y$. We are done with Step 1. $\square$

Step 2. Reducing to an asymptotic problem.

Claim: There are arbitrarily large $k$ such that there is a finite field of order $q = p^{k}$.

Proof. If $f \in \mathbb{F}_{p}[x]$ is irreducible and of degree $k$, then $\mathbb{F}_{p}[x]/(f)$ is a field of order $p^{k}$. So we need to prove that the degrees of irreducible polynomials in $\mathbb{F}_{p}[x]$ get arbitrarily high. Since $\mathbb{F}_{p}[x]$ has only $(p-1)p^{k}$ polynomials of degree $k$, it suffices to prove that $\mathbb{F}_{p}[x]$ has infinitely many irreducible polynomials.

At this point we use Euclid’s argument: if $f_{1}, \ldots , f_{N}$ were all the irreducibles in $\mathbb{F}_{p}[x]$, then $1 + f_{1} \cdots f_{N}$ is nonconstant but is not divisible by any $f_{i}$, so it cannot be factored into irreducibles, contradicting the fact that $\mathbb{F}_{p}[x]$ is a unique factorization domain. Therefore $\mathbb{F}_{p}[x]$ has infinitely many irreducibles and we are done. $\square$
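For concreteness, here is a brute-force Python sketch (mine, not part of Anon1's argument) that counts the monic irreducibles of each degree over $\mathbb{F}_p$ by sieving out all products of two monic polynomials of lower degree:

```python
from itertools import product

def mul(f, g, p):
    """Multiply coefficient tuples (lowest degree first) over F_p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return tuple(h)

def count_irreducibles(p, dmax):
    """Number of monic irreducible polynomials of each degree 1..dmax over F_p."""
    # monic[d] = all monic polynomials of degree d, as coefficient tuples
    monic = {d: [c + (1,) for c in product(range(p), repeat=d)]
             for d in range(1, dmax + 1)}
    reducible = set()
    for a in range(1, dmax):
        for b in range(a, dmax - a + 1):   # all factorizations with a <= b
            for f in monic[a]:
                for g in monic[b]:
                    reducible.add(mul(f, g, p))
    return {d: sum(1 for f in monic[d] if f not in reducible)
            for d in range(1, dmax + 1)}

print(count_irreducibles(2, 4))   # {1: 2, 2: 1, 3: 2, 4: 3}
print(count_irreducibles(3, 3))   # {1: 3, 2: 3, 3: 8}
```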

Step 3. Counting the irreducible polynomials of degree $d$ over various $\mathbb{F}_q$.

Claim: If $q$ is a power of $p$ for which there is a finite field of order $q$ (here denoted $\mathbb{F}_{q}$), then the number of irreducible monic polynomials of degree $d$ over $\mathbb{F}_{q}$ is a polynomial (call it $I_{d}(q)$) depending only on $d$, evaluated at $q$.

Proof. This proceeds by strong induction on $d$. The base case is $d = 1$: any linear polynomial is irreducible, and there are $q$ monic linear polynomials over $\mathbb{F}_{q}$.

Now suppose $d > 1$. There are $q^{d}$ monic degree $d$ polynomials over $\mathbb{F}_{q}$. To count how many are irreducible, we remove the ones which are reducible: If a monic polynomial is reducible, it can be written uniquely as a product of monic irreducibles of lower degree. These degrees add up to $d$, so they give a partition $\lambda$ of $d$ which can be any partition except $(d)$ itself.

Specifically, for each such partition $\lambda$, let $a_{i}$ be the number of parts of size $i$. The number of multisets of $a_{i}$ monic irreducibles of degree $i$ is $\binom{I_{i}(q)+a_{i}-1}{a_{i}}$, so
$$I_{d}(q) = q^{d} - \sum_{\substack{\lambda \vdash d \\ \lambda \neq (d)}} \prod_{i=1}^{d} \binom{I_{i}(q) + a_{i} - 1}{a_{i}},$$ which is a polynomial in $q$ by the strong induction hypothesis. $\square$
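Here is a short sympy sketch of this recurrence (my own code; sympy's `partitions` yields each partition as a dictionary $\{i : a_i\}$):

```python
from sympy import symbols, binomial, expand, expand_func
from sympy.utilities.iterables import partitions

q = symbols('q')
memo = {1: q}

def I(d):
    """The counting polynomial I_d(q) from the recurrence above."""
    if d not in memo:
        total = q**d
        for lam in partitions(d):
            lam = {i: a for i, a in lam.items() if a > 0}   # copy: sympy reuses the dict
            if lam == {d: 1}:                               # skip the partition (d) itself
                continue
            term = 1
            for i, a in lam.items():
                term *= expand_func(binomial(I(i) + a - 1, a))
            total -= term
        memo[d] = expand(total)
    return memo[d]

for d in range(1, 5):
    print(d, I(d))
# 1 q
# 2 q**2/2 - q/2
# 3 q**3/3 - q/3
# 4 q**4/4 - q**2/4
```

Evaluating at $q=2$ gives $2, 1, 2, 3$, matching the brute-force counts from Step 2's sketch.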

Now, it suffices to show that for any given $n$, $I_n(q)>0$ for sufficiently large $q$. For, by step 2, this will give us a power of $p$ called $q$ for which $\mathbb{F}_q$ exists and such that there is a monic irreducible polynomial over $\mathbb{F}_q$ of degree $n$. By step 1 we are then done.

Since $I_{n}$ is a polynomial, this amounts to showing that $I_{n}$ has a positive leading coefficient for all $n$. This we do now:

Step 4. Showing that the counting polynomial $I_n$ has positive leading coefficient.

Claim: For all $n$, $I_{n}$ has degree $n$ and leading coefficient $1/n$.

Proof. Again we proceed by strong induction on $n$. When $n = 1$, we have $I_{n}(q) = q$ and we are done. So suppose $n>1$. Then as in step 3, we have
$$I_{n}(q) = q^{n} - \sum_{\substack{\lambda \vdash n \\ \lambda \neq (n)}} \prod_{i=1}^{n} \binom{I_{i}(q) + a_{i} - 1}{a_{i}}.$$
Since $a_{n} = 0$ for every $\lambda \neq (n)$, we can restrict the products to only $i \lt n$. Then the degree of each of these products is $\sum_{i=1}^{n} i\cdot a_i=n$ by the induction hypothesis, and so $I_n(q)$ has degree at most $n$. Moreover, assuming by the induction hypothesis that the coefficient of $q^i$ in $I_i(q)$ is $1/i$, we have that the coefficient of $q^{i\cdot a_i}$ in $\binom{I_i(q)+a_i-1}{a_i}$ is $$\frac{1}{a_i!\cdot i^{a_i}}.$$

Finally, it follows that the coefficient of $q^n$ in $I_n(q)$ is $$1-\sum_{\substack{\lambda \vdash n \\ \lambda \neq (n)}} \prod_{i=1}^{n} \frac{1}{a_{i}!\,i^{a_{i}}}.$$ Now, notice that for a given $\lambda$, the product $$z_\lambda=a_1!\cdot 1^{a_1}\cdot a_2!\cdot 2^{a_2}\cdots a_n!\cdot n^{a_n}$$ is the well-known denominator of the formula $$n!/z_\lambda$$ for the size of the conjugacy class of permutations of cycle type $\lambda$ in the symmetric group $S_n$. Since the conjugacy classes partition $S_n$, we have $$\sum_{\lambda\vdash n} \frac{n!}{z_\lambda}=n!,$$ and so $$\sum_{\lambda\vdash n}\frac{1}{z_\lambda}=1.$$ It follows that the coefficient above is simply $$\frac{1}{z_{(n)}}=\frac{1}{n},$$ and the claim is proven. $\square$
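As a sanity check on the $z_\lambda$ identity, here is a short Python sketch (the helper name `z` is mine) confirming that $\sum_{\lambda \vdash n} 1/z_\lambda = 1$ for small $n$:

```python
from fractions import Fraction
from math import factorial
from sympy.utilities.iterables import partitions

def z(lam):
    """z_lambda for a partition given as a dict {part size i: multiplicity a_i}."""
    result = 1
    for i, a in lam.items():
        result *= factorial(a) * i**a
    return result

for n in range(1, 8):
    total = sum(Fraction(1, z({i: a for i, a in lam.items() if a > 0}))
                for lam in partitions(n))
    print(n, total)   # always 1
```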

Schubert Calculus

I recently gave a talk on Schubert calculus in the Student Algebraic Geometry Seminar at UC Berkeley. As a combinatorialist, this is a branch of enumerative geometry that particularly strikes my fancy. I also made a handout for the talk called “Combinatorics for Algebraic Geometers,” and I thought I’d post it here in blog format.

Motivation

Enumerative geometry

In the late 1800’s, Hermann Schubert investigated problems in what is now called enumerative geometry, or more specifically, Schubert calculus. As some examples, where all projective spaces are assumed to be over the complex numbers:

  1. How many lines in $\newcommand{\PP}{\mathbb{P}}
     \newcommand{\CC}{\mathbb{C}}
     \newcommand{\RR}{\mathbb{R}}
     \newcommand{\ZZ}{\mathbb{Z}}
     \DeclareMathOperator{\Gr}{Gr}
     \DeclareMathOperator{\Fl}{Fl}
     \DeclareMathOperator{\GL}{GL}\PP^n$ pass through two given points?

     Answer: One, as long as the points are distinct.

  2. How many planes in $\PP^3$ contain a given line $l$ and a given point $P$?

     Answer: One, as long as $P\not\in l$.

  3. How many lines in $\PP^3$ intersect four given lines $l_1,l_2,l_3,l_4$?

     Answer: Two, as long as the lines are in sufficiently “general” position.

  4. How many $(r-1)$-dimensional subspaces of $\PP^{m-1}$ intersect each of $r\cdot (m-r)$ general subspaces of dimension $m-r-1$ nontrivially?

     Answer: $$\frac{(r(m-r))!\cdot (r-1)!\cdot (r-2)!\cdots 1!}{(m-1)!\cdot(m-2)!\cdots(m-r)!}$$

The first two questions are not hard, but how would we figure out the other two? And what do we mean by “sufficiently general position”?
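(Before moving on, here is a quick Python sketch, with a function name of my own invention, that evaluates the formula from problem 4; note that $r=2$, $m=4$ recovers the answer to problem 3:)

```python
from math import factorial, prod

def subspace_count(r, m):
    """Evaluate the formula from problem 4: the number of (r-1)-planes in
    P^(m-1) meeting r(m-r) general (m-r-1)-planes nontrivially."""
    num = factorial(r * (m - r)) * prod(factorial(i) for i in range(1, r))
    den = prod(factorial(m - i) for i in range(1, r + 1))
    return num // den

print(subspace_count(2, 4))   # 2, matching problem 3
print(subspace_count(3, 6))   # 42
```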

Schubert’s 19th century solution to problem 3 above would have invoked what he called the “Principle of Conservation of Number,” as follows. Suppose the four lines were arranged so that $l_1$ and $l_2$ intersect at a point $P$, $l_3$ and $l_4$ intersect at $Q$, and none of the other pairs of lines intersect. Then the planes formed by each pair of crossing lines intersect at another line $\alpha$, which necessarily intersects all four lines. The line $\beta$ through $P$ and $Q$ also intersects all four lines, and it is not hard to see that these are the only two in this case.

Schubert would have said that, since there are two solutions in this configuration, by continuity the answer is two for every configuration of lines for which the number of solutions is finite. Unfortunately, due to degenerate cases involving counting with multiplicity, this “Principle” led to many errors in computations in harder questions of enumerative geometry. Hilbert’s 15th problem asked to put Schubert’s enumerative methods on a rigorous foundation. This led to the modern-day theory known as Schubert calculus.

Describing moduli spaces

Schubert calculus can also be used to describe intersection properties in simpler ways. As we will see, it will allow us to easily prove statements such as:

The variety of all lines in $\PP^4$ that are both contained in a general $3$-dimensional hyperplane $S$ and intersect a general line $l$ nontrivially is isomorphic to the variety of all lines in $S$ passing through a specific point in that hyperplane.

(Here, the specific point in the hyperplane is the intersection of $S$ and $l$.)

A better way: Carlitz’s bijection

In my previous post on the major index, I mentioned the statistics $\DeclareMathOperator{\inv}{inv}
\DeclareMathOperator{\maj}{maj}
\DeclareMathOperator{\code}{code} \inv$ and $\maj$ on permutations, and the fact that they were equidistributed. I have since learned of a more enlightening way of proving this result, due to Carlitz.

Let’s start by recalling the definitions of $\inv$ and $\maj$. An inversion of a permutation of the numbers $1,2,\ldots,n$ is a pair of numbers which appear “out of order” in the sequence, that is, with the larger number appearing first. For instance, for $n=4$, the permutation $3,1,4,2$ has three inversions: $(3,1)$, $(3,2)$, and $(4,2)$. We write $$\inv(3142)=3.$$

For the major index, define a descent of a permutation to be an index $i$ for which the $i$th number is greater than the $(i+1)$st. The major index is defined to be the sum of the descents. For instance, the permutation $3,1,4,2$ has two descents, in positions $1$ and $3$, so the major index is $4$. We write $$\maj(3142)=4.$$

It turns out that
\begin{eqnarray}
\sum_{w\in S_n} q^{\inv(w)} &=& (1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}) \label{eqn} \\
&=& \sum_{w\in S_n}q^{\maj(w)},
\end{eqnarray} so we say that $\inv$ and $\maj$ are equidistributed, that is, the number of permutations having $\maj w=k$ is equal to the number having $\inv w=k$ for all $k$.

Carlitz’s proof, which is explained beautifully on pages 5 and 6 of this paper of Mark Skandera, cleanly shows each equality directly, without first requiring any algebraic manipulation. (Carlitz’s original paper can be found here, and it seems that by creating a MyJSTOR account, you can now access this article online for free! Thumbs up to this recent step towards open access.)

The main idea of Carlitz’s bijection is to consider an intermediate combinatorial object called a permutation code. Given a permutation $w=w_1, w_2,\ldots,w_n$ of $\{1,2,\ldots,n\}$, define $\code(w)=a_1a_2\cdots a_n$ where $a_i$ is the number of entries of $w$ to the right of $w_i$ which are less than $w_i$. For instance, we have $$\code(41532)=30210,$$ since the $4$ is part of an inversion with three entries to its right, the $1$ with none, and so on. It is not hard to see that each permutation has a unique code, and a sequence is the code of some permutation if and only if its last entry is (at most) $0$, its second-to-last entry is at most $1$, its third-to-last entry is at most $2$, and so on.
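Both statistics and the map $\code$ take only a few lines of Python; here is a sketch (function names mine):

```python
def inv(w):
    """Number of inversions of the permutation w."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def code(w):
    """a_i = number of entries to the right of w_i that are smaller than w_i."""
    return tuple(sum(1 for v in w[i + 1:] if v < w[i]) for i in range(len(w)))

print(code((4, 1, 5, 3, 2)))   # (3, 0, 2, 1, 0)
print(inv((4, 1, 5, 3, 2)))    # 6, the sum of the code entries
```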

To formalize this, we define a code of length $n$ to be a sequence of $n$ nonnegative integers such that the $i$th entry is no larger than $n-i$. There are clearly $n!$ codes (one choice for the last digit, two choices for the second-to-last, and so on), and furthermore the $q$-series that counts codes by the sum of their entries is given by
$$\sum_{\substack{c\text{ code} \\ |c|=n}} q^{\sum c_i}=(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}).$$
The bijection $\code$ described above sends a permutation $w$ with $\inv w=k$ to a code having sum $k$, and so this immediately shows that
\begin{eqnarray*}
\sum_{w\in S_n} q^{\inv w}&=&\sum_{\substack{c\text{ code} \\ |c|=n}} q^{\sum c_i} \\
&=&(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}).
\end{eqnarray*}

So, it now suffices to find a bijection between codes of length $n$ and permutations of $n$, which I’ll call $\phi$, which sends the sum of the code to $\maj$ of the permutation. Given a code of length $n$, we read it backwards, and at each step insert the numbers $n, n-1,\ldots$ successively to form a permutation, so that at each step, the major index of the resulting word increases by precisely the code number read at each step. For example, consider the code $30210$ from above. We read it backwards, so the first entry is $0$, and we first place a $5$ in our permutation. So our first step yields the sequence: $$5$$ The next code entry from the right is $1$, so we must insert the $4$ in a way that increases the major index by exactly one. We see that the only way to do this is to insert it to the right of the $5$: $$5,4$$ The next code entry is $2$, so we insert the $3$ to the right of the $4$, as this is the only way to increase the major index by $2$: $$5,4,3$$ The next entry is $0$, and the only place to insert a $2$ so that the major index does not change is before the $3$. So our sequence is now:
$$5,4,2,3$$ Finally, we insert the $1$ so as to increase the major index by $3$:
$$5,4,2,1,3$$

So $\phi(30210)=54213.$ It is not hard to show that there is always exactly one valid insertion of the next number at each step in the process, and hence $\phi$ is a bijection, with the property that $\sum_i c_i=\maj(\phi(c))$ for any code $c$ of length $n$. This implies immediately that
\begin{eqnarray*}
\sum_{w\in S_n} q^{\maj w}&=&\sum_{\substack{c\text{ code} \\ |c|=n}} q^{\sum c_i} \\
&=&(1)(1+q)(1+q+q^2)\cdots(1+q+q^2+\cdots+q^{n-1}),
\end{eqnarray*} and the desired equation follows.
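For completeness, here is a Python sketch of $\phi$ (again with my own function names); the final asserts check bijectivity and the $\maj$ property over all codes of length $4$:

```python
from itertools import product

def maj(w):
    """Major index: sum of positions i (1-indexed) with w_i > w_{i+1}."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def phi(c):
    """Read the code backwards, inserting n, n-1, ..., 1 so that each
    insertion raises maj by the corresponding code entry."""
    n = len(c)
    w = []
    for k, target in zip(range(n, 0, -1), reversed(c)):
        for pos in range(len(w) + 1):
            cand = w[:pos] + [k] + w[pos:]
            if maj(cand) - maj(w) == target:
                w = cand
                break
    return tuple(w)

print(phi((3, 0, 2, 1, 0)))   # (5, 4, 2, 1, 3)

n = 4
codes = list(product(*(range(n - i) for i in range(n))))   # i-th entry < n - i
assert len({phi(c) for c in codes}) == len(codes)          # phi is a bijection
assert all(maj(phi(c)) == sum(c) for c in codes)           # sum of code = maj
```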

Yet another definition of the Schur functions

I continue to be amazed at the multitude of different contexts in which the Schur functions naturally appear.

In a previous post, I defined the Schur symmetric functions combinatorially, via the formula $$s_\lambda=\sum_{|\mu|=|\lambda|}K_{\lambda\mu} m_\mu$$ where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$ and $m_\mu$ is the monomial symmetric function of shape $\mu$. I also defined them as the ratio $$s_\lambda=\frac{a_{\lambda+\delta}}{a_\delta}$$ where $a_\mu$ is the elementary antisymmetric function and $\delta=(n-1,n-2,\ldots,1,0)$.
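For a concrete check of the second formula (recall $a_\mu = \det(x_i^{\mu_j})$), here is a sympy sketch with $\lambda=(2,1)$ in two variables, so $\delta=(1,0)$:

```python
from sympy import symbols, Matrix, cancel, factor

x, y = symbols('x y')

# lambda = (2, 1), delta = (1, 0), lambda + delta = (3, 1)
a_top = Matrix([[x**3, x], [y**3, y]]).det()    # a_{lambda + delta}
a_delta = Matrix([[x, 1], [y, 1]]).det()        # a_delta = x - y

print(factor(cancel(a_top / a_delta)))          # x*y*(x + y) = x^2 y + x y^2
```

The result is exactly $s_{(2,1)}=x^2y+xy^2$ in two variables.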

And, in another post, I pointed out that the Frobenius map sends the irreducible characters of the symmetric group $S_n$ to the Schur functions $s_\lambda$. This can be taken as a definition of the Schur functions:

The Schur functions $s_\lambda$, for $|\lambda|=n$, are the images of the irreducible characters of $S_n$ under the Frobenius map.

Today, I’d like to introduce an equally natural representation-theoretic definition of the Schur functions:

The Schur functions $s_\lambda$, for $\lambda$ having at most $n$ parts, are the characters of the irreducible polynomial representations of the general linear group $GL_n(\mathbb{C})$.

I recently read about this in Fulton’s book on Young Tableaux, while preparing to give a talk on symmetric function theory in this term’s new seminar on Macdonald polynomials. Here is a summary of the basic ideas (turn to page 2):

Combinatorial species

Just last month, I took my Ph.D. qualifying exam at Berkeley. It was a grueling three-hour oral exam, but from it came a gemstone that simply must be shared!

The very first question I was asked was from my advisor, Mark Haiman: “Enumerate [count] the labeled plane trees on $n$ vertices.”

More precisely, let $n$ be a positive integer, and define a labeled plane tree to be a tree with vertices labeled $1,2,\ldots,n$, along with a cyclic ordering of the branches attached to each vertex. (The name “plane tree” refers to the fact that the orderings of the branches determine an embedding of the tree into the plane, up to continuous deformation.) For instance, say we had the following trees drawn in the plane:

[Figure: two plane trees on the same labeled vertex set, drawn with different cyclic orderings of the branches at vertex $1$]

They are isomorphic as graphs but not as plane trees, because the branches around the vertex $1$ have different clockwise cyclic orderings.

So how do we count these trees?

Let’s get our hands dirty and start counting them by brute force. Letting $p_n$ denote the number of distinct plane trees up to isomorphism, it isn’t hard to find that $p_0=0$, $p_1=1$, $p_2=1$, $p_3=3$, $p_4=20$, and with a bit more work, $p_5=210$. Hmm. No obvious pattern jumps out immediately, and the computation gets difficult very quickly as $n$ increases.
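(The brute force is easy to automate. The sketch below is mine rather than the exam's: it encodes each labeled tree by its Prüfer sequence and uses the fact that a vertex of degree $d$ admits $(d-1)!$ cyclic orderings of its branches.)

```python
from itertools import product
from math import factorial, prod

def plane_trees(n):
    """Count labeled plane trees on n vertices by brute force.

    Labeled trees on 1..n correspond to Pruefer sequences in {1..n}^(n-2),
    with deg(v) = 1 + (occurrences of v in the sequence).  Each tree then
    carries prod_v (deg(v) - 1)! choices of cyclic orderings at its vertices.
    """
    if n == 1:
        return 1
    total = 0
    for seq in product(range(1, n + 1), repeat=n - 2):
        deg = [1] * (n + 1)     # deg[v] for v = 1..n; index 0 unused
        for v in seq:
            deg[v] += 1
        total += prod(factorial(d - 1) for d in deg[1:])
    return total

print([plane_trees(n) for n in range(1, 6)])   # [1, 1, 3, 20, 210]
```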

Perhaps we can find a generating function for $p_n$ instead. And here is where species play a role.

A bridge between two worlds: the Frobenius map

There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group.

What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds.

The Frobenius map essentially takes a character of a representation of the symmetric group and assigns it a symmetric function. To define it, recall that characters are constant across conjugacy classes, and in $S_n$, the conjugacy classes correspond to the partitions of $n$ by associating a permutation $\pi$ with its cycle type $c(\pi)$. For instance, the permutations $\pi=(12)(345)$ and $\sigma=(35)(142)$ both have cycle type $\lambda=(3,2)$. So, any character $\chi$ satisfies $\chi(\pi)=\chi(\sigma)$.

We can now define the Frobenius map $F$. For any character $\chi$ of a representation of $S_n$, define
$$F\left(\chi\right)=\frac{1}{n!}\sum_{\pi\in S_n}\chi(\pi)p_{c(\pi)}$$ where $p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_k}$ is the $\lambda$th power sum symmetric function. (Recall that $p_{\lambda_i}=x_1^{\lambda_i}+x_2^{\lambda_i}+x_3^{\lambda_i}+\cdots$.)

Then, combining the permutations with the same cycle type in the sum, we can rewrite the definition as a sum over partitions of size $n$:
$$F\left(\chi\right)=\sum_{|\lambda|=n}\frac{1}{z_\lambda}\chi(\lambda)p_{\lambda}$$ for some constants $z_\lambda$. (It’s a fun little combinatorial problem to show that if $\lambda$ has $m_1$ 1’s, $m_2$ 2’s, and so on, then $z_\lambda=1^{m_1}m_1!2^{m_2}m_2!\cdots.$)

As an example, let’s take a look at the character table of $S_3$:
$$
\begin{array}{c|ccc}
& [(1)(2)(3)] & [(12)(3)] & [(123)] \\\hline
\chi^{(3)} & 1 & 1 & 1 \\
\chi^{(1,1,1)} & 1 & -1 & 1 \\
\chi^{(2,1)} & 2 & 0 & -1
\end{array}
$$ Consider the third row, $\chi^{(2,1)}$, and let us work over three variables $x,y,z$. Then the Frobenius map sends $\chi^{(2,1)}$ to
$$F\chi^{(2,1)}=\frac{1}{6}(2p_{(1,1,1)}-2p_3)$$ by our definition above. This simplifies to:
$$\frac{1}{3}((x+y+z)^3-(x^3+y^3+z^3))=x^2y+y^2x+x^2z+z^2x+y^2z+z^2y+2xyz$$ which can be written as $m_{2,1}+2m_{1,1,1}$. Notice that, by the combinatorial definition of the Schur functions, this is precisely the Schur function $s_{2,1}$! In fact:

The Frobenius map sends the irreducible character $\chi^\lambda$ to the Schur function $s_\lambda$ for all $\lambda$.

And therein lies the bridge.
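(By the way, the expansion above is easy to check mechanically; here is a quick sympy sketch:)

```python
from sympy import symbols, expand, Rational

x, y, z = symbols('x y z')
p111 = (x + y + z)**3              # p_{(1,1,1)} = p_1^3
p3 = x**3 + y**3 + z**3

F = expand(Rational(1, 6) * (2*p111 - 2*p3))
print(F)   # x**2*y + x**2*z + x*y**2 + 2*x*y*z + x*z**2 + y**2*z + y*z**2
```

The output is $m_{(2,1)}+2m_{(1,1,1)}=s_{(2,1)}$, as claimed.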

Why is this the case? Read on to find out…

The hidden basis

In the last few posts (see here and here), I’ve been talking about various bases for the symmetric functions: the monomial symmetric functions $m_\lambda$, the elementary symmetric functions $e_\lambda$, the power sum symmetric functions $p_\lambda$, and the homogeneous symmetric functions $h_\lambda$. As some of you aptly pointed out in the comments, there is one more important basis to discuss: the Schur functions!

When I first came across the Schur functions, I had no idea why they were what they were, why every symmetric function can be expressed in terms of them, or why they were useful or interesting. I first saw them defined using a simple, but rather arbitrary-sounding, combinatorial approach:

First, define a semistandard Young tableau (SSYT) to be a way of filling in the squares of a partition diagram (Young diagram) with numbers such that they are nondecreasing across rows and strictly increasing down columns. For instance, the Young diagram of $(5,3,1,1)$ is:

and one possible SSYT of this shape is:

(Fun fact: The plural of tableau is tableaux, pronounced exactly the same as the singular, but with an x.)

Now, given an SSYT $T$ with numbers of size at most $n$, let $\alpha_i$ be the number of $i$'s written in the tableau. Given variables $x_1,\ldots,x_n$, we can define the monomial $x^T=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. Then the Schur function $s_\lambda$ is defined to be the sum of all monomials $x^T$ where $T$ is an SSYT of shape $\lambda$.

For instance, if $\lambda=(2,1)$, then the possible SSYT’s of shape $\lambda$ with numbers of size at most $2$ are:

So the Schur function $s_{(2,1)}$ in $2$ variables $x$ and $y$ is $x^2y+y^2x$.
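The combinatorial definition is also easy to compute with. Here is a short Python/sympy sketch of my own that enumerates SSYT row by row and assembles $s_\lambda$ in finitely many variables:

```python
from itertools import combinations_with_replacement
from sympy import symbols, expand

def ssyt(shape, n):
    """All SSYT of the given partition shape with entries in 1..n."""
    def fill(rows):
        r = len(rows)
        if r == len(shape):
            yield rows
            return
        for row in combinations_with_replacement(range(1, n + 1), shape[r]):
            # rows weakly increase by construction; enforce strict columns
            if r == 0 or all(row[j] > rows[r - 1][j] for j in range(shape[r])):
                yield from fill(rows + [row])
    yield from fill([])

def schur(shape, n):
    xs = symbols(f'x1:{n + 1}')    # x1, ..., xn
    total = 0
    for T in ssyt(shape, n):
        term = 1
        for row in T:
            for entry in row:
                term *= xs[entry - 1]
        total += term
    return expand(total)

print(schur((2, 1), 2))   # x1**2*x2 + x1*x2**2
print(schur((2, 1), 3))   # s_{(2,1)} in three variables, visibly symmetric
```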

This combinatorial definition seemed rather out-of-the-blue when I first saw it. Even more astonishing is that the Schur functions have an abundance of nice properties. To name a few:

  • The Schur functions are symmetric. Interchanging any two of the variables results in the same polynomial.
  • The Schur functions form a basis for the symmetric functions. Like the elementary symmetric functions, every symmetric polynomial can be expressed uniquely as a linear combination of $s_\lambda$'s.
  • The Schur functions arise as the characters of the irreducible polynomial representations of the general linear group. This was proven by Issai Schur and was the first context in which the Schur functions were defined. Here, a polynomial representation is a matrix representation in which the entries are given by polynomials in the entries of the elements of $GL_n$.
  • The transition matrix between the power sum symmetric functions and the Schur functions is precisely the character table of the symmetric group $S_n$. This fact is essential in proving the Murnaghan-Nakayama rule that I mentioned in a previous post.

All of this is quite remarkable – but why is it true? It is not even clear that they are symmetric, let alone a basis for the symmetric functions.

After studying the Schur functions for a few weeks, I realized that while this combinatorial definition is very useful for quickly writing down a given $s_\lambda$, there is an equivalent algebraic definition that is perhaps more natural in terms of understanding its role in symmetric function theory. (Turn to page 2!)

Characters of the symmetric group

Last week, I sat down to compute the character table of $S_5$ for the first time in several years.

My first instinct, following what I learned five years ago, was to use the standard tricks for computing character tables: compute the characters of some easily constructed representations, use the fact that the sum of the squares of the dimensions of the irreducible characters is $|S_5|=120$, and use the orthogonality of the rows and columns of the character table to finish it off.

This, however, is rather tedious for a group as large as $S_5$. But I had recently learned that the irreducible representations of the symmetric group are completely classified, and can be constructed using group actions on standard Young tableaux. Was there a way to use this theory to compute each entry of the character table directly?

It turned out that it was possible to laboriously use this theory to compute the characters one at a time, by finding an explicit matrix representation for each irreducible representation. However, I still had a nagging feeling that there should be a faster way to compute the characters.

I dug through the standard references in search of an answer, and I found it: the Murnaghan-Nakayama Rule (thanks to Enumerative Combinatorics, Vol. 2, by Richard Stanley). This concise and elegant method gives a simple combinatorial way of computing the character table of any symmetric group $S_n$. And it goes like this:

Step 1. The conjugacy classes of $S_n$ consist of the permutations having a fixed number of cycles of each length, with each class corresponding to a partition of $n$ called the shape of its permutations. For instance, the conjugacy classes for $S_5$ have representative elements $\mathbf{1}, (12), (12)(34), (123), (123)(45), (1234), (12345),$ corresponding to the seven partitions $11111, 2111, 221, 311, 32, 41, 5$ of $5$. Since the characters of a group are constant on its conjugacy classes, we index the columns of our character table by these partitions:

Step 2. There are precisely as many irreducible characters as conjugacy classes, so we can also index the irreducible characters by the partitions of $n$. We represent each partition as a Young diagram and write these down the left of the table:

Step 3: The Murnaghan-Nakayama Rule. We can now calculate the entry in row $\lambda$ and column $\mu$. Let $\mu_1,\mu_2,\ldots$ be the parts of $\mu$ in decreasing order. Drawing $\lambda$ as a Young diagram, define a filling of $\lambda$ with content $\mu$ to be a way of writing a number in each square of $\lambda$ such that the numbers are weakly increasing along each row and column and there are exactly $\mu_i$ squares labeled $i$ for each $i$.

Consider all fillings of $\lambda$ with content $\mu$ such that for each label $i$, the squares labeled $i$ form a connected skew shape that does not contain a $2\times 2$ square. (A skew shape is connected if the graph formed by connecting horizontally or vertically adjacent squares is connected.) Such a filling is called a border-strip tableau, with each label coloring a border strip. For instance, the following is a border-strip tableau:

For each label in the tableau, define the height of the corresponding border strip to be one less than the number of rows of the border strip. We then weight the tableau by $(-1)^s$ where $s$ is the sum of the heights of the border strips that compose the tableau. (For instance, the heights of the border strips in the above tableau are $1$, $2$, $1$, and $0$, and so the weight of the tableau is $(-1)^{1+2+1+0}=1$.) Then, the entry in the character table is simply the sum of these weights over all border-strip tableaux of shape $\lambda$ and content $\mu$.

For example, in the case $n=5$, suppose we are trying to calculate the entry for $\lambda=(3,2)$ and $\mu = 221$. Then the three possible border-strip tableaux are drawn below:

They have weights $1$, $1$, and $-1$, for a total weight of $1+1-1=1$. So, the corresponding entry of the character table is $1$.
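The whole rule fits in a few lines of Python. In the sketch below (my own implementation, not from Stanley), border strips are removed one part of $\mu$ at a time; a strip occupying rows $i$ through $j$ forces the intermediate rows, which makes the enumeration straightforward:

```python
def remove_strips(lam, k):
    """Yield (mu, height) for each border strip of k cells whose removal
    from the partition lam leaves a partition mu."""
    n = len(lam)
    for i in range(n):                  # top row of the strip
        for j in range(i, n):           # bottom row of the strip
            mu = list(lam)
            for r in range(i, j):       # rows i..j-1 are forced by connectivity
                mu[r] = lam[r + 1] - 1
            removed = sum(lam[r] - mu[r] for r in range(i, j))
            t = lam[j] - (k - removed)  # cells left in row j
            low = lam[j + 1] if j + 1 < n else 0
            if low <= t <= lam[j] - 1:
                mu[j] = t
                yield tuple(v for v in mu if v > 0), j - i

def mn_char(lam, mu):
    """chi^lam on the class of cycle type mu, by the Murnaghan-Nakayama rule.
    The parts of mu may be processed in any order."""
    if not mu:
        return 1 if not lam else 0
    k, rest = mu[0], mu[1:]
    return sum((-1) ** ht * mn_char(nu, rest)
               for nu, ht in remove_strips(lam, k))

print(mn_char((3, 2), (2, 2, 1)))         # 1, matching the example above
print(mn_char((3, 2), (1, 1, 1, 1, 1)))   # 5, the dimension of chi^{(3,2)}
```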

That’s all! This gives a quick way to churn out all the entries in the character table:

Using the Murnaghan-Nakayama rule, I was able to compute this table by hand in about 30 minutes. What a gemstone!