What is a $q$-analog? (Part 1)

Hi, I’m Maria and I’m a $q$-analog addict. The theory of $q$-analogs is a little-known gem, and in this series of posts I’ll explain why they’re so awesome and addictive!

So what is a $q$-analog? It is one of those rare mathematical terms whose definition doesn’t really capture what it is about, but let’s start with the definition anyway:

Definition: A $q$-analog of a statement or expression $P$ is a statement or expression $P_q$, depending on $q$, such that setting $q=1$ in $P_q$ results in $P$.

So, for instance, $2q+3q^2$ is a $q$-analog of $5$, because if we plug in $q=1$ we get $5$.

Sometimes, if $P_q$ is not defined at $q=1$, we also say it’s a $q$-analog if $P$ can be recovered by taking the limit as $q$ approaches $1$. For instance, the expression $$\frac{q^5-1}{q-1}$$ is another $q$-analog of $5$ – even though we get division by zero if we plug in $q=1$, we do have a well defined limit that we can calculate, for instance using L’Hospital’s Rule: $$\lim_{q\to 1} \frac{q^5-1}{q-1}=\lim_{q\to 1} \frac{5q^4}{1} = 5.$$
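If you like to check such computations by machine, here is a minimal Python sketch (assuming the sympy library) that verifies this limit:

```python
from sympy import symbols, limit

q = symbols('q')

# (q**5 - 1)/(q - 1) is undefined at q = 1, but its limit as q -> 1
# recovers the ordinary number 5, just as L'Hospital's Rule predicts.
print(limit((q**5 - 1) / (q - 1), q, 1))  # prints 5
```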

Now of course, there is an unlimited supply of $q$-analogs of the number $5$, but certain $q$-analogs are more important than others. When mathematicians talk about $q$-analogs, they are usually referring to “good” or “useful” $q$-analogs, a notion that doesn’t have a widely accepted standard definition, but which I’ll attempt to define here:

More Accurate Definition: An interesting $q$-analog of a statement or expression $P$ is a statement or expression $P_q$ depending on $q$ such that:

  1. Setting $q=1$ or taking the limit as $q\to 1$ results in $P$,
  2. $P_q$ can be expressed in terms of (possibly infinite) sums or products of rational functions of $q$ over some field,
  3. $P_q$ gives us more refined information about something that $P$ describes, and
  4. $P_q$ has $P$-like properties.

Because of Property 2, most people would agree that $5^q$ is not an interesting $q$-analog of $5$: usually we’re looking for polynomial-like things in $q$.


On the other hand, $\frac{q^5-1}{q-1}$ is an excellent $q$-analog of $5$ for a number of reasons. It certainly satisfies Property 2. It can also be easily generalized to give a $q$-analog of any real number: we can define $$(a)_q=\frac{q^a-1}{q-1},$$ a $q$-analog of the number $a$.

In addition, for positive integers $n$, the expression simplifies:
$$(n)_q=\frac{q^n-1}{q-1}=1+q+q^2+\cdots+q^{n-1}.$$
So for instance, $(5)_q=1+q+q^2+q^3+q^4$, which is a natural $q$-analog of the basic fact that $5=1+1+1+1+1$. The powers of $q$ are just distinguishing each of our “counts” as we count to $5$. This polynomial also captures the fact that $5$ is prime, in a $q$-analog-y way: the polynomial $1+q+q^2+q^3+q^4$ cannot be factored into two smaller-degree polynomials with integer coefficients. So the $q$-number $(5)_q$ also satisfies Properties 3 and 4 above: it gives us more refined information about $5$-ness, by keeping track of the way we count to $5$, and behaves like $5$ in the sense that it can’t be factored into smaller $q$-analogs of integers.
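If you’d like to experiment with $q$-integers yourself, here is a minimal Python sketch (assuming the sympy library; the helper name `q_int` is my own, not standard):

```python
from sympy import symbols, expand, factor

q = symbols('q')

def q_int(n):
    """The q-analog (n)_q = 1 + q + ... + q^(n-1) of a positive integer n."""
    return sum(q**i for i in range(n))

print(expand(q_int(5)))  # q**4 + q**3 + q**2 + q + 1
print(factor(q_int(5)))  # returned unchanged: (5)_q is irreducible over the rationals
print(factor(q_int(6)))  # (q + 1)*(q**2 + q + 1)*(q**2 - q + 1), since 6 = 2*3
```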

But it doesn’t stop there. Properties 3 and 4 can be satisfied in all sorts of ways, and this $q$-number is even more interesting than we might expect. It comes up in finite geometry, analytic number theory, representation theory, and combinatorics. So much awesome mathematics is involved in the study of $q$-analogs that I’ll only cover one aspect of it today: $q$-analogs that appear in geometry over a finite field $\mathbb{F}_q$. Turn to the next page to see them!

Glencoe/McGraw-Hill doesn’t believe this bijection exists

Education is a difficult task; it really is. Teaching takes a few tries to get the hang of. Writing textbooks is even harder. And math is one of those technical fields in which human error is hard to avoid. So usually, when I see a mistake in a math text, it doesn’t bother me much.

But some things just hurt my soul.

[Image: Example 2 from the Glencoe worksheet, which claims that the set of rational numbers cannot be put in one-to-one correspondence with the integers]

No correspondence between the integers and rationals? Yes there is, Example 2! Yes there is!

This horrifying falsehood was stated in the supplementary “Study Guide and Intervention” worksheet for the Glencoe/McGraw-Hill Algebra 2 textbook, and recently pointed out on Reddit. Or at least it was stated in some version of this worksheet. The original file can be found online at various websites, including one download link from Glencoe’s website that shows up on a Google search. There are other versions of the document that don’t contain this example, but this version was almost certainly used in some high schools, as the Reddit thread claims.

Luckily, mathematicians are here to set the record straight. The Wolfram blog published a fantastic post about this error already, with several proofs of the countability of the rationals. There are also several excellent older expositions on this topic, including on the Math Less Traveled and the Division by Zero blogs. I’ll discuss two of my favorites here as well.

But first, let’s talk about what is wrong with the argument in Example 2. The author is correct in stating that listing all the rationals in order would give a one-to-one and onto correspondence between the rationals and integers, so they tried to do so in a haphazard way and failed. At that point, instead of trying a different ordering, they gave up and figured it couldn’t be done! That’s not a proof, or even logically sound (as my students at this year’s Prove it! Math Academy would certainly recognize).

If one were going to try to prove that a certain set couldn’t be organized into a list, a common tactic would be to use proof by contradiction: assume there was a way to list them, and then show that something goes wrong and you get a contradiction. Of course, this wouldn’t work either in the case of the rationals, because they can be listed.

So let’s discuss a correct solution.

Getting our definitions straight

First, let’s state the precise meaning of a one-to-one and onto correspondence. A function $f$ from a set $A$ to a set $B$, written $f:A\to B$, is an assignment of each element $a\in A$ to an element $f(a)\in B$. To clear up another misuse of notation in the Glencoe Algebra textbook, the set $A$ is called the domain and $B$ is called the codomain (not the range, as Glencoe would have you think – the range refers to the set of elements of $B$ that are assigned to by the function). A function is:

  • One-to-one, or injective, if no two elements of $A$ are assigned to the same element of $B$, i.e., if $f(x)=f(y)$ then $x=y$.
  • Onto, or surjective, if every element of $B$ is mapped to, i.e., for all $b\in B$, there exists $a\in A$ such that $f(a)=b$.

For instance, if $\mathbb{Z}$ denotes the set of integers, the function $f:\mathbb{Z}\to \mathbb{Z}$ defined by $f(x)=2x$ is injective, since if $2x=2y$ then $x=y$. However, it is not surjective, since an odd number like $3$ is not equal to $2x$ for any integer $x$.

A function which is both injective and surjective is said to be bijective, and is called a bijection. This is just a shorter way of saying “one-to-one and onto correspondence,” which is wordy and cumbersome.

So, we want to find a bijection $f:\mathbb{Z}\to \mathbb{Q}$, where $\mathbb{Z}$ denotes the integers and $\mathbb{Q}$ the rationals. Notice that we can list all the integers in order: $$0,1,-1,2,-2,3,-3,\ldots$$
and so if we list all the rationals in order, $r_0,r_1,r_2,\ldots$, we can define the function $f$ accordingly by $f(0)=r_0$, $f(1)=r_1$, $f(-1)=r_2$, and so on. The function will be bijective if and only if every rational number appears in the list exactly once.

Next, let’s be precise about the rationals. Recall that the rational numbers are those numbers which can be written as fractions $a/b$ where $a$ and $b$ are integers with $b\neq 0$. In order to assign every rational number a unique representation, let us restrict to the case where $b>0$ and $a$ is any integer such that $\mathrm{gcd}(a,b)=1$. This condition makes $a/b$ into a reduced fraction. So the number $2/-4$ should be written as $-1/2$ in this convention. It follows that we can think of the set of rational numbers as the set $$\mathbb{Q}=\{(a,b) | b>0\text{ and }\mathrm{gcd}(a,b)=1\text{ and }a,b\in \mathbb{Z}\}.$$

Listing the rationals, naively

One way to construct this list is to think of the rationals as ordered pairs $(a,b)$ of integers with $b>0$ and $\mathrm{gcd}(a,b)=1$ as described above. There is an easy way of ordering all pairs of integers – plot them on a coordinate plane, and use a spiral!

[Figure: a spiral through the integer points of the coordinate plane, starting at $(0,0)$; the green dots mark the pairs $(a,b)$ with $b>0$ and $\mathrm{gcd}(a,b)=1$]

Now, to list the rationals, follow the spiral from $(0,0)$ outwards. Each time we reach an ordered pair of integers, say $(a,b)$, write it down if $b>0$ and $\mathrm{gcd}(a,b)=1$, and otherwise skip it and move on. (These are the green dots above.) This process guarantees that we list all the rationals exactly once.
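Here is one way to carry out this spiral enumeration in code (a Python sketch; the particular spiral-walking scheme below is just one of many valid choices):

```python
from math import gcd
from fractions import Fraction

def spiral():
    """Walk the integer points of the plane in a square spiral from (0, 0)."""
    x = y = 0
    yield x, y
    arm = 1
    while True:
        # Right and up by `arm` steps, then left and down by `arm + 1` steps.
        for dx, dy, steps in ((1, 0, arm), (0, 1, arm),
                              (-1, 0, arm + 1), (0, -1, arm + 1)):
            for _ in range(steps):
                x, y = x + dx, y + dy
                yield x, y
        arm += 2

def first_rationals(count):
    """The first `count` rationals a/b encountered along the spiral,
    keeping a point (a, b) only if b > 0 and gcd(a, b) = 1."""
    found = []
    for a, b in spiral():
        if b > 0 and gcd(a, b) == 1:
            found.append(Fraction(a, b))
            if len(found) == count:
                return found

print(first_rationals(8))  # 1, 0, -1, 2, 1/2, -1/2, -2, 3 (as Fractions)
```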

More elegant methods

There are many other elegant enumerations of the rationals, and one particularly nice one is due to Calkin and Wilf. They construct a binary tree in which each rational number appears exactly once, as shown below.

[Figure: the first few rows of the Calkin–Wilf tree]

The tree is constructed as follows: start with $1/1$ at the top, and for each node $a/b$ in the tree, assign it the left and right children $a/(a+b)$ and $(a+b)/b$ respectively. This tree turns out to contain every positive rational number exactly once. It also has an incredible property: if we read the entries of each row of the tree successively from left to right, the denominator of each entry will match the numerator of the next entry, giving us a sequence of numerators/denominators:
$$1,1,2,1,3,2,3,1,4,3,5,2,5,3,4,1,\ldots$$ such that the consecutive quotients give us a list of the positive rationals.
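Here is a short Python sketch of the Calkin–Wilf construction, which also checks the interleaving of numerators and denominators described above:

```python
from fractions import Fraction

def calkin_wilf_rows(depth):
    """Yield the first `depth` rows of the Calkin-Wilf tree as lists of Fractions."""
    row = [Fraction(1, 1)]
    for _ in range(depth):
        yield row
        # Each node a/b has left child a/(a+b) and right child (a+b)/b.
        row = [child
               for f in row
               for child in (Fraction(f.numerator, f.numerator + f.denominator),
                             Fraction(f.numerator + f.denominator, f.denominator))]

flat = [f for row in calkin_wilf_rows(4) for f in row]

# The denominator of each entry equals the numerator of the next, so the
# whole tree is encoded by the sequence of numerators (plus a final 1):
print([f.numerator for f in flat] + [flat[-1].denominator])
# [1, 1, 2, 1, 3, 2, 3, 1, 4, 3, 5, 2, 5, 3, 4, 1]
```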

This makes me wonder whether there is a way of listing the integers such that (a) every integer occurs exactly once in the sequence, and (b) the consecutive quotients give all rationals exactly once (allowing the consecutive integers to have common factors greater than one). Thoughts?

The r-major index

In this post and this one too, we discussed the inv and maj statistics on words, and two different proofs of their equidistribution. In fact, there is an even more unifying picture behind these statistics: they are simply two different instances of an entire parameterized family of statistics, called $r\text{-maj}$, all of which are equidistributed!

Rawlings defined an $r$-inversion of a permutation $\pi$ to be a pair of entries $(\pi_i,\pi_j)$ with $i\lt j$ and $$0\lt \pi_i-\pi_j\lt r.$$ For instance, $21534$ has three total inversions, $(2,1)$, $(5,4)$, and $(5,3)$, but only the first two have $\pi_i-\pi_j<2$, so it has two $2$-inversions. He also defined an $r$-descent to be an index $i$ for which $$\pi_i\ge \pi_{i+1}+r,$$ so that $21534$ has only position $3$ as a $2$-descent.

Finally, he defined the $r\text{-maj}$ of a permutation to be $$r\text{-}\mathrm{maj}(\pi)=\left(\sum_{\pi_i\ge \pi_{i+1}+r}i\right)+\#r\text{-}\mathrm{inversions}.$$ Thus $2\text{-}\mathrm{maj}(21534)=3+2=5$. Notice that $1\text{-maj}$ is the usual major index, and $n\text{-maj}$ is the inv statistic!

Rawlings’ result is that these statistics all have the same distribution: for any $r,s\ge 1$, the number of permutations of $\{1,2,\ldots,n\}$ having $r\text{-maj}$ value $k$ is the same as the number of them having $s\text{-maj}$ value $k$ for any $k$. More succinctly, $$\sum_{\pi\in S_n} q^{r\text{-maj}(\pi)}=\sum_{\pi\in S_n} q^{s\text{-maj}(\pi)}.$$
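Before reading on, it’s easy to at least verify the claim by brute force for small $n$ (a Python sketch, not a proof, of course):

```python
from itertools import permutations
from collections import Counter

def r_maj(pi, r):
    """Rawlings' r-maj: the sum of the r-descent positions (1-indexed)
    plus the number of r-inversions."""
    n = len(pi)
    descent_sum = sum(i + 1 for i in range(n - 1) if pi[i] >= pi[i + 1] + r)
    r_invs = sum(1 for i in range(n) for j in range(i + 1, n)
                 if 0 < pi[i] - pi[j] < r)
    return descent_sum + r_invs

assert r_maj((2, 1, 5, 3, 4), 2) == 5  # the worked example above

# The distribution of r-maj over S_5 is the same for every r = 1, ..., 5:
dists = [Counter(r_maj(p, r) for p in permutations(range(1, 6)))
         for r in range(1, 6)]
print(all(d == dists[0] for d in dists))  # True
```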

A colleague of mine mentioned this result and challenged me to prove it without reading the proof first, so here goes. I challenge the reader to do the same before turning to the next page. Good luck!

Deriving the discriminant of a cubic polynomial through analytic geometric means

This is a contributed gemstone, written by Sushant Vijayan. Enjoy!

Consider a generic cubic equation $$at^3+bt^2+ct+d=0,$$ where $a,b,c,d$ are real numbers and $a \neq 0$. We can transform this equation into a depressed cubic equation, i.e., one with no $t^2$ term, by means of the Tschirnhaus transformation $t=x-\frac{b}{3a}$, followed by dividing through by $a$. The depressed cubic equation is given by $$x^3+px+q=0$$ where $p$ and $q$ are related to $a,b,c,d$ by the relation given here. Setting $p=-m$ and $q=-n$ and rearranging, we arrive at $$x^3 =mx+n \hspace{3cm} (1)$$ We will investigate the nature of the roots of this equation. We begin by plotting the graph of $y=x^3$:

[Figure: the graph of $y=x^3$]

It is an odd, monotonically increasing function with an inflection point at $x=0$. The real roots of equation (1) are the $x$-coordinates of the points of intersection between the straight line $y= mx+n$ and the curve $y=x^3$. It is clear geometrically that however we draw the straight line, there is, as a bare minimum, at least one point of intersection. This corresponds to the fact that all cubic equations with real coefficients have at least one real root.

It is immediately clear that if the slope of the line $m$ is less than $0$, there is only one point of intersection and hence only one real root. On the other hand, the condition $m>0$ is equivalent to demanding that the depressed cubic $y(x)= x^3-mx-n$ has two points of local extrema (which is a necessary, but not sufficient, condition for the existence of three real roots).

Now the possibility of repeated real roots occurs when the straight line is tangent to the curve $y=x^3$. Hence consider the slope of the function $y=x^3$:
$$\frac{dy}{dx}=3x^2$$ Now equating the slopes (note $m\ge 0$) we get the tangent points:
$$3x^2=m$$
$$x=\pm \sqrt{\frac {m}{3} } $$

Equivalently, the tangents are at the two values of $x$ for which
$$|x|=\sqrt{\frac {m}{3} }.$$

The corresponding $y$-intercepts for these tangent straight lines are:
$$n=\pm \frac{2m}{3}\sqrt{\frac{m}{3}}$$
or, in other words,
$$|n|=\frac{2m}{3}\sqrt{\frac{m}{3}}$$

[Figure: the two tangent lines with slope $m$ to the curve $y=x^3$]

Thus for a straight line with a given slope $m\ge 0$ there are only two tangents, with the corresponding tangent points and $y$-intercepts given above. This is the case where equation (1) has one repeated real root (together with one other real root).

What about the case where the slope is still $m$ but $|n|<\frac{2m}{3}\sqrt{\frac{m}{3}}$?

[Figure: a line with slope $m$ lying in the region bounded by the two tangent lines]

In this case the straight line is parallel to the two tangent lines (it has the same slope) and lies in the region bounded by them. Hence it necessarily intersects the curve $y=x^3$ at three points. This corresponds to the situation of equation (1) having three real roots.

And the case where $|n| > \frac{2m}{3}\sqrt{\frac{m}{3}}$ corresponds to a line lying outside the region bounded by the two tangent lines, which gives only one point of intersection.

[Figure: a line with slope $m$ lying outside the region bounded by the two tangent lines]

Hence the necessary and sufficient condition for three real roots (including repeated roots) is given by:
$$|n| \le \frac{2m}{3}\sqrt{\frac{m}{3}} \hspace{3cm}(2) $$

We note that the condition $m\ge 0$ is subsumed within the above condition (2), for if $m<0$ then condition (2) cannot be satisfied. Condition (2) involves a radical expression; squaring both sides and rearranging, we arrive at $$ \frac{4m^3}{27}-n^2 \ge 0.$$ Multiplying by $27$ and setting $\Delta=4m^3-27n^2$, we get $$ \Delta \ge 0. $$ Here $\Delta$ is the discriminant of the cubic, and it has all the information required to determine the nature of the roots of the depressed cubic given by equation (1). It may then be written in terms of $a,b,c,d$ by inverting the Tschirnhaus transformation. A similar exercise may be carried out for the quartic equation, and a similar, albeit more complicated, expression can be derived for the discriminant. It would be very interesting to do the same exercise for the quintic and see where it fails (fail it must, for otherwise it would contradict Abel's famous result on the insolvability of the quintic by radicals).
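As a quick numerical sanity check of condition (2), one can count real roots directly (a Python sketch using numpy; the tolerance-based root counting is a heuristic, not a proof):

```python
import numpy as np

def discriminant(m, n):
    """The discriminant 4m^3 - 27n^2 of x^3 = m*x + n."""
    return 4 * m**3 - 27 * n**2

def real_root_count(m, n, tol=1e-9):
    """Number of (numerically) real roots of x^3 - m*x - n = 0."""
    roots = np.roots([1, 0, -m, -n])
    return sum(1 for r in roots if abs(r.imag) < tol)

for m, n in [(3, 1), (3, 2), (1, 5), (-2, 1)]:
    print(m, n, discriminant(m, n), real_root_count(m, n))
# Discriminant >= 0 goes with three real roots (counted with multiplicity),
# and discriminant < 0 with exactly one.
```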

The Springer Correspondence, Part III: Hall-Littlewood Polynomials

This is the third post in a series on the Springer correspondence. See Part I and Part II for background.

In this post, we’ll restrict ourselves to the type A setting, in which $\DeclareMathOperator{\GL}{GL}\DeclareMathOperator{\inv}{inv} G=\GL_n(\mathbb{C})$, the Borel $B$ is the subgroup of invertible upper triangular matrices, and $U\subset G$ is the unipotent subvariety. In this setting, the flag variety is isomorphic to $G/B$ or $\mathcal{B}$ where $\mathcal{B}$ is the set of all subgroups conjugate to $B$.

The Hall-Littlewood polynomials

For a given partition $\mu$, the Springer fiber $\mathcal{B}_\mu$ can be thought of as the set of all flags $F$ which are fixed by left multiplication by a unipotent element $u$ of Jordan type $\mu$. In other words, it is the set of complete flags $$F:0=F_0\subset F_1 \subset F_2 \subset \cdots \subset F_n=\mathbb{C}^n$$ where $\dim F_i=i$ and $uF_i=F_i$ for all $i$.

In the last post we saw that there is an action of the Weyl group, in this case the symmetric group $S_n$, on the cohomology rings $H^\ast(\mathcal{B}_\mu)$ of the Springer fibers. We let $R_\mu=H^\ast(\mathcal{B}_\mu)$ denote this ring, and we note that its graded Frobenius characteristic $$\DeclareMathOperator{\Frob}{Frob}\widetilde{H}_\mu(X;t):=\Frob_t(H^\ast(\mathcal{B}_\mu))=\sum_{d\ge 0}t^d \Frob(H^{2d}(\mathcal{B}_\mu))$$ encodes all of the data determining this ring as a graded $S_n$-module. The symmetric functions $\widetilde{H}_\mu(X,t)\in \Lambda_{\mathbb{Q}(t)}(x_1,x_2,\ldots)$ are called the Hall-Littlewood polynomials.

The first thing we might ask about a Hall-Littlewood polynomial $\widetilde{H}_\mu$ is: what is its degree as a polynomial in $t$? In other words…

What is the dimension of $\mathcal{B}_\mu$?

The dimension of $\mathcal{B}_\mu$ will tell us the highest possible degree of its cohomology ring, giving us at least an upper bound on the degree of $H_\mu$. To compute the dimension, we will decompose $\mathcal{B}_\mu$ into a disjoint union of subvarieties whose dimensions are easier to compute.

Let’s start with a simple example. If $\mu=(1,1,1,\ldots,1)$ is a single-column shape of size $n$, then $\mathcal{B}_\mu$ is the full flag variety $\mathcal{B}$, since here the unipotent element $1$ is in the conjugacy class of shape $\mu$, and we can interpret $\mathcal{B}_\mu$ as the set of flags fixed by the identity matrix (all flags). As described in Part I, the flag variety can be decomposed into Schubert cells $X_w$ where $w$ ranges over all permutations in $S_n$ and $\dim(X_w)=\inv(w)$. For instance, $X_{45132}$ is the set of flags defined by the initial row spans of a matrix of the form:
$$\left(\begin{array}{ccccc}
0 & 1 & \ast & \ast & \ast \\
1 & 0 & \ast & \ast & \ast \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & \ast & 0 \\
0 & 0 & 0 & 1 & 0 \end{array}\right)$$
because this matrix has its leftmost $1$’s in positions $4,5,1,3,2$ from the right in that order.

Thus the dimension of the flag variety is the maximum of the dimensions of these cells. The number of inversions in $w$ is maximized when $w=w_0=n(n-1)\cdots 2 1$, and so $$\dim(\mathcal{B})=\inv(w_0)=\binom{n}{2}.$$

We claim that in general, $\dim(\mathcal{B}_\mu)=n(\mu)$ where if $\mu^\ast$ denotes the conjugate partition, $n(\mu)=\sum \binom{\mu^\ast_i}{2}$. Another way of defining $n(\mu)$ is as the sum of the entries of the superstandard tableau formed by filling the bottom row of $\mu$ with $0$’s, the next row with $1$’s, and so on:

[Figure: the superstandard filling of a Young diagram, with the bottom row filled with $0$’s, the next row with $1$’s, and so on]
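Both descriptions of $n(\mu)$ are easy to compute (a Python sketch; the helper names are my own):

```python
from math import comb

def conjugate(mu):
    """The conjugate partition mu*: column lengths of the diagram of mu."""
    return [sum(1 for part in mu if part > i) for i in range(mu[0])]

def n_mu(mu):
    """n(mu) as the sum of binomial(mu*_i, 2) over the conjugate partition."""
    return sum(comb(c, 2) for c in conjugate(mu))

def n_mu_superstandard(mu):
    """n(mu) as the sum of the superstandard filling: the bottom (longest)
    row mu_1 gets 0's, the next row up gets 1's, and so on."""
    return sum(i * part for i, part in enumerate(mu))

mu = (3, 2, 2)
print(n_mu(mu), n_mu_superstandard(mu))  # 6 6
```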

To show this, notice that $\mathcal{B}_\mu$ is a subvariety of the full flag variety $\mathcal{B}$, and so $$\mathcal{B}_\mu=\mathcal{B}_\mu\cap \mathcal{B}=\bigsqcup \mathcal{B}_\mu\cap X_{w}.$$ It thus suffices to find the largest possible dimension of the varieties $\mathcal{B}_\mu\cap X_{w}$.

Let $u$ be the standard unipotent element of Jordan type $\mu$. For instance, the matrix below is the standard unipotent matrix of shape $(3,2,2)$.
[Figure: the standard unipotent matrix of Jordan type $(3,2,2)$]
Then the set $\mathcal{B}_\mu\cap X_{w}$ can be defined as the subset of $n\times n$ matrices defining flags in $X_w$ whose partial row spans are fixed by the action of $u$. Note that since the first vector is fixed by $u$, it must be equal to a linear combination of the unit vectors $e_{\mu_1}, e_{\mu_1+\mu_2},\ldots$. So we instantly see that the dimension of $\mathcal{B}_\mu\cap X_w$ is in general less than that of $X_w$.

Now, consider the permutation $$\hat{w}=n,n-\mu_1,n-\mu_1-\mu_2,\ldots,n-1,n-1-\mu_1,n-1-\mu_1-\mu_2,\ldots,\ldots.$$ Then it is not hard to see that the matrices in $X_{\hat{w}}$ whose flags are fixed by $u$ are those with $1$’s in positions according to $\hat{w}$, and with $0$’s in all other positions besides those in row $i$ from the top and column $k-\mu_1-\cdots-\mu_j$ from the right for some $i,j,k$ satisfying $i\le j$ and $\mu^\ast_1+\cdots+\mu^\ast_k< i \le \mu^\ast_1+\cdots+\mu^\ast_{k+1}$. This is a mouthful which is probably better expressed via an example. If $\mu=(3,2,2)$ as above, then $\hat{w}=7426315$, and $\mathcal{B}_\mu\cap X_{7426315}$ is the set of flags determined by the rows of matrices of the form $$\left(\begin{array}{ccccccc} 1 & 0 & 0 & \ast & 0 & \ast & 0 \\ 0 & 0 & 0 & 1 & 0 & \ast & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & \ast & 0 & \ast \\ 0 & 0 & 0 & 0 & 1 & 0 & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{array}\right)$$ The first three rows above correspond to the first column of $\mu$, the next three rows to the second column, and the final row to the last column. Notice that the stars in each such block of rows form a triangular pattern similar to that for $X_{w_0}$, and therefore there are $n(\mu)=\binom{\mu^\ast_1}{2}+\binom{\mu^\ast_2}{2}+\cdots$ stars in the diagram. Thus $\mathcal{B}_\mu\cap X_{\hat{w}}$ is an open affine set of dimension $n(\mu)$, and so $\mathcal{B}_\mu$ has dimension at least $n(\mu)$. A bit more fiddling with linear algebra and multiplication by $u$ (try it!) shows that, in fact, for any permutation $w$, the row with a $1$ in the $i$th position contributes at most as many stars in $\mathcal{B}_\mu\cap X_{w}$ as it does in $\mathcal{B}_\mu\cap X_{\hat{w}}$. In other words, all other components $\mathcal{B}_\mu\cap X_{w}$ have dimension at most $n(\mu)$, and so $$\dim\mathcal{B}_\mu=n(\mu).$$

The orthogonality relations

In Lusztig’s survey on character sheaves, he shows that the Hall-Littlewood polynomials (and similar functions for other Lie types) satisfy certain orthogonality and triangularity conditions that determine them completely. To state them in the type A case, we first define $\widetilde{H}_\mu[(1-t)X;t]$ to be the result of plugging in the monomials $x_1,-tx_1,x_2,-tx_2,\ldots$ for $x_1,x_2,\ldots$ in the Hall-Littlewood polynomials. (This is a special kind of plethystic substitution.) Then Lusztig’s work shows that:

  • $\left\langle \widetilde{H}_\mu(X;t),s_\lambda\right\rangle=0$ for any $\lambda<\mu$ in dominance order, and $\langle\widetilde{H}_\mu,s_\mu\rangle=1$
  • $\left\langle \widetilde{H}_\mu[(1-t)X;t],s_\lambda\right\rangle=0$ for any $\lambda>\mu$ in dominance order
  • $\left\langle \widetilde{H}_\mu(X;t),\widetilde{H}_{\lambda}[(1-t)X;t]\right\rangle=0$ whenever $\lambda\neq \mu$.

In all three of the above, the inner product $\langle,\rangle$ is the Hall inner product, which can be defined as the unique inner product for which $$\langle s_\lambda,s_\mu\rangle = \delta_{\lambda\mu}$$ for all $\lambda$ and $\mu$.

Since the Schur functions $s_\lambda$ correspond to the irreducible representations $V_\lambda$ of $S_n$, we can therefore interpret these orthogonality conditions in a representation theoretic manner. The inner product $\left \langle \widetilde{H}_\mu(X;t),s_\lambda \right\rangle$ is the coefficient of $s_\lambda$ in the Schur expansion of $\widetilde{H}_\mu$, and is therefore the Hilbert series of the isotypic component of type $V_\lambda$ in the cohomology ring $R_\mu=H^\ast(\mathcal{B}_\mu)$. Moreover, the seemingly arbitrary substitution $X\mapsto (1-t)X$ actually corresponds to taking tensor products with the exterior powers of the permutation representation $V$ of $S_n$. To be precise: $$\widetilde{H}_\mu[(1-t)X;t]=\sum_{i\ge 0} (-1)^i t^i \Frob_t(R_\mu\otimes \Lambda^i(V)).$$

It turns out that any two of the three conditions above uniquely determine the Hall-Littlewood polynomials, and in fact can be used to calculate them explicitly. On the next page, we will work out an example using the first and third conditions above.

Can you prove it?

As this ingenious post over at Math With Bad Drawings explained so clearly recently, there is a big difference between finding the answer to a math problem and being able to explain, beyond all reasonable doubt, why your answer is correct. It is the difference between solving it and proving it!

While mathematical proof is a huge part of math and science, it is unfortunately somewhat overlooked in the standard US curriculum. Partly for this reason, my family and I started a new math summer camp called Prove it! Math Academy. This year’s program will be a two-week crash course in proofs, using challenging problems and advanced mathematical concepts as examples.

Consider the following diagram which appears on our program website:

[Figure: a configuration of mutually tangent circles inside a larger circle, with three lines through the circle centers meeting at a single point]

No explanation, no words, no proof. Just the picture. Enticing!

The first step in proving a fact is to state exactly what it is you are trying to prove. In this case, one might generalize/interpret this diagram as follows. Suppose you have a circle $O$ (the big white one) containing three circles $A$, $B$, and $C$ inside of it that are mutually externally tangent to each other and internally tangent to $O$. (These three circles $A$, $B$, and $C$ would be the large green, large yellow, and small red circle in the diagram above.) Now let $X$, $Y$, and $Z$ be the remaining green, yellow, and red circles respectively: for instance, $X$ is the circle tangent internally to $O$ and externally to $B$ and $C$. Then the lines $AX$, $BY$, and $CZ$ connecting the centers of these circles all meet at one point – they are concurrent.

You can construct this in Geogebra or Asymptote. You can vary the sizes of the circles and test that indeed, these lines always seem to be concurrent. But unlike in science, lots of evidence to support a fact is not good enough in mathematics. We want to prove beyond all reasonable doubt that this is always true, no matter what the sizes of the circles are. In other words, you can solve it, but can you prove it?

A very elegant solution to this problem, which appears on the next page of this post, is due to Michael Kural, one of my former students at MOP. He was intrigued by the image on our program flyer at a recent math tournament, and came up with a proof using only Desargues’ theorem, Monge’s theorem, homothety, and inversion. Read on!

The Springer Correspondence, Part II: The Resolution

This is a continuation of The Springer Correspondence, Part I. Here we will work with unipotent matrices to construct the Springer resolution and the cohomology of its fibers.

Unipotent Matrices and Partitions

A unipotent element of a linear algebraic group $G$ is any element $u\in G$ such that $1-u$ is nilpotent. That is, $u=1+n$ where $n^k=0$ for some $k$.

To get a sense of what unipotent matrices look like, consider the type A situation in which $\DeclareMathOperator{\GL}{GL}\newcommand{\CC}{\mathbb{C}} G=\GL_n(\CC)$. Given a unipotent element $u$, we can conjugate it by some matrix to put it in Jordan normal form. It will look something like this:
$$gug^{-1}=\left(\begin{array}{ccccccc}
\lambda_1 & 1 & & & & & \\
& \lambda_1 & 1 & & & & \\
& & \lambda_1 & & & & \\
& & & \lambda_2 & 1 & & \\
& & & & \lambda_2 & & \\
& & & & & \ddots & \\
& & & & & & \lambda_k
\end{array}\right)$$

It turns out that the matrix above is particularly simple in this case:

The eigenvalues $\lambda_i$ of a unipotent matrix are all $1$.

To see this, suppose $\lambda$ is an eigenvalue of $u$. We have $uv=\lambda v$ for some vector $v$, and so $$(1-u)v=(1-\lambda)v.$$ Since $1-u=n$ is nilpotent, say with $n^k=0$, we have $$(1-u)^kv=(1-\lambda)^kv=0,$$ so $(1-\lambda)^k=0$. Since $\lambda\in\CC$ and $\CC$ is a field, it follows that $\lambda=1$, as claimed.

Therefore, every unipotent matrix is conjugate to a matrix having all $1$’s on the diagonal, $0$’s or $1$’s on the off-diagonal, and $0$’s everywhere else. The blocks of $1$’s on the off-diagonal split the matrix into Jordan blocks, which we can order by size from greatest to least. Let the sizes of the Jordan blocks be $\mu_1,\mu_2,\ldots,\mu_k$. Then $\mu=(\mu_1,\ldots,\mu_k)$ is a partition of $n$, and determines the conjugacy class of a given unipotent matrix.

For instance, the partition $\mu=(3,2,2)$ corresponds to the conjugacy class of unipotent matrices with the Jordan canonical form below.

[Figure: the Jordan canonical form of a unipotent matrix of Jordan type $(3,2,2)$]

This can all be summed up in the following fact:

The unipotent conjugacy classes in $\GL_n$ are in one-to-one correspondence with the partitions of $n$.

Now, I know what you are thinking:

“Maria, if the unipotent conjugacy classes of $\GL_n$ and the irreducible representations of $S_n$ are both indexed by the partitions of $n$, shouldn’t there be some nice geometric construction that relates them directly?”

Indeed there is! The Springer correspondence gives just that – and furthermore relates the unipotent conjugacy classes of any Lie group $G$ to the representations of its Weyl group.

The Springer Resolution

In what follows, let $G$ be a Lie group, and let $U$ be the subvariety of $G$ consisting of all unipotent elements. The variety $U$ is not smooth in general, and to resolve the singularities we construct the variety $\widetilde{U}\subset U\times \mathcal{B}$ by $$\widetilde{U}=\{(u,B):u\in B\}.$$ Recall from the previous post that $\mathcal{B}$ is the variety of all Borel subgroups of $G$ and is isomorphic to the flag variety $G/B$ for any Borel $B$. If we interpret $\mathcal{B}$ as the flag variety in the case $G=\GL_n$, we can alternatively define $\widetilde{U}$ as the set of all pairs $(u,F)$ where $F$ is a flag and $u$ fixes $F$, that is, $uF_i=F_i$ for each $i$.

It turns out that $\widetilde{U}$ is smooth and the projection map $$\pi:\widetilde{U}\to U$$ is proper, so it resolves the singularities in $U$. This map is known as the Springer resolution.

The theory gets rather deep at this point, so in what follows I will state the main facts without proof. For full details I refer the interested reader to the exposition in Chapter 3 of Representation Theory and Complex Geometry by Chriss and Ginzburg.

Springer Fibers and Weyl Group Action

For any $x\in U$, define the Springer fiber $\mathcal{B}_x$ to be the fiber $\pi^{-1}(x)$ of the Springer resolution over $x$, that is, the set of all Borel subgroups of $G$ that contain $x$. Now, consider the cohomology ring $H^\ast(\mathcal{B}_x)$ over $\CC$. It turns out that there is an action of the Weyl group $W$ on this cohomology ring, called the Springer action.

There is unfortunately no direct way of defining this action. To get some intuition for where the action comes from, notice that the Springer resolution above can be lifted to the entire group: one can define $\widetilde{G}$ to be the subvariety of $G\times \mathcal{B}$ consisting of all pairs $(g,B)$ such that $g\in B$. Now, let $x$ be a regular semisimple element of $G$.

In the case $G=\GL_n$, a regular semisimple element is simply a diagonalizable element $x$ with $n$ distinct nonzero eigenvalues. If $x$ is of this form, any subspace of $\CC^n$ fixed by $x$ is a direct sum of its (linear) eigenspaces. So, if $V_1,\ldots,V_n$ are the eigenspaces corresponding to the distinct eigenvalues of $x$, any flag fixed by $x$ is of the form $$V_{\sigma(1)}\subset V_{\sigma(1)}\oplus V_{\sigma(2)}\subset \cdots \subset V_{\sigma(1)}\oplus \cdots \oplus V_{\sigma(n)}$$ for some permutation $\sigma$ of $\{1,2,\ldots,n\}$. It follows that $\mathcal{B}_x$ consists of exactly $n!$ flags, and has a natural action of $S_n$ via permuting the eigenspaces $V_i$. Therefore, $S_n$ acts on $\mathcal{B}_x$ and therefore on $H^\ast(\mathcal{B}_x)$.

In general, if $x$ is regular and semisimple, the fiber $\mathcal{B}_x$ is a finite set of size $|W|$ where $W$ is the Weyl group of $G$. The regular semisimple elements form a dense subset of $G$, and one can use this to extend the action to all cohomology rings $H^\ast(\mathcal{B}_x)$ for any $x\in G$. This is the tricky part, and involves many more constructions than fit in a reasonable-length blog post, so again I refer the reader to this awesome book.

The Springer Correspondence

We’re finally ready to state the Springer correspondence. For $x\in G$, let $d(x)$ be the dimension of the Springer fiber $\mathcal{B}_x$.

In the case $G=\GL_n$, the top cohomology groups $H^{d(x)}(\mathcal{B}_x)$ are $S_n$-modules due to the Springer action described above. Notice also that $\mathcal{B}_x$ depends only on the conjugacy class of $x$, so for $x$ in the unipotent conjugacy class with shape $\mu$, we write $\mathcal{B}_\mu$ to denote this Springer fiber, with $d(\mu)$ its dimension. It turns out that these $S_n$-modules are precisely the irreducible representations of $S_n$.

The $S_n$-module $H^{d(\mu)}(\mathcal{B}_\mu)$ is isomorphic to the irreducible representation $V_\mu$ of $S_n$ corresponding to $\mu$.

And there you have it.

For general Lie groups $G$, the Springer correspondence is not quite as nice; the top cohomology groups $H^d(\mathcal{B}_u)$ (where $u$ is a unipotent conjugacy class) are not in general irreducible $W$-modules. However, all of the irreducible $W$-modules occur exactly once as a summand among the modules $H^d(\mathcal{B}_u)$, and there is a correspondence between the irreducible representations of $W$ and pairs $(u,\xi)$ where $u$ is a unipotent conjugacy class in $G$ and $\xi$ is an irreducible $G$-equivariant local system on $u$.

Hall-Littlewood Polynomials

The fact that the top cohomology groups $H^{d(\mu)}(\mathcal{B}_\mu)$ are so nice naturally raises the question: what about the other cohomology groups? What $S_n$-modules do we get in each degree?

In particular, let $R_\mu=H^\ast(\mathcal{B}_\mu)$. Then $R_\mu$ is a graded $S_n$-module with grading $$R_\mu=\bigoplus (R_\mu)_i=\bigoplus H^i(\mathcal{B}_\mu),$$ and so we can construct the Frobenius series
$$F_t(R_\mu)=\sum_{i=0}^{d(\mu)}F((R_{\mu})_i)t^i$$
where $F$ is the Frobenius map that sends $S_n$-modules to symmetric functions.

The Hall-Littlewood polynomials $\widetilde{H}_\mu(\mathbf{x};t)$ are defined to be the Frobenius characteristics $F_t(R_\mu)$, and are therefore a class of symmetric polynomials in the variables $\mathbf{x}=x_1,x_2,\ldots$ with coefficients in $\mathbb{Q}[t]$. They have incredibly rich combinatorial structure which reveals the decomposition of $R_\mu$ into irreducible $S_n$-modules… structure that I will save for a later post. Stay tuned!

The Springer Correspondence, Part I: The Flag Variety

In prior posts, we’ve seen that the irreducible representations of the symmetric group $S_n$ are in one-to-one correspondence with the partitions of $n$, and the Schur functions give an elegant encoding of their characters as symmetric polynomials. Now we can dive a bit deeper: a geometric construction known as the Springer resolution allows us to obtain all the irreducible representations of $S_n$ geometrically, and as a side bonus give natural graded representations that will allow us to define a $q$-analog of the Schur functions known as the Hall-Littlewood polynomials.

Quite a mouthful of terminology. Let’s start at the beginning.

The Classical Flag Variety

When you think of a flag, you might imagine something like this:

[Image: a flag on a flagpole]

Roughly speaking, a flag on a flagpole consists of:

  • a point (the bulbous part at the top of the pole),
  • a line passing through that point (the pole),
  • a plane passing through that line (the plane containing the flag), and
  • space to put it in.

Mathematically, this is the data of a complete flag in three dimensions. However, higher-dimensional beings would require more complicated flags. So in general, a complete flag in $n$-dimensional space $\mathbb{C}^n$ is a chain of vector spaces of each dimension from $0$ to $n$, each containing the previous:

$$0=V_0\subset V_1 \subset V_2 \subset \cdots \subset V_n=\mathbb{C}^n$$

with $\dim V_i=i$ for all $i$.

(Our higher-dimensional flag-waving imaginary friends are living in a world of complex numbers because $\mathbb{C}$ is algebraically closed and therefore easier to work with. However, one could define the flag variety similarly over any field $k$.)

Variety Structure

Now that we’ve defined our flags, let’s see what happens when we wave them around continuously in space. It turns out we get a smooth algebraic variety!

Indeed, the set of all possible flags in $\mathbb{C}^n$ forms an algebraic variety of dimension $n(n-1)$ (over $\mathbb{R}$), covered by open sets similar to the Schubert cells of the Grassmannian. In particular, given a flag $\{V_i\}_{i=1}^n$, we can choose $n$ vectors $v_1,\ldots,v_n$ such that the span of $v_1,\ldots,v_i$ is $V_i$ for each $i$, and list the vectors $v_i$ as row vectors of an $n\times n$ matrix. We can then perform certain row reduction operations to form a different basis $v_1^\prime,\ldots,v_n^\prime$ that also span the subspaces of the flag, but whose matrix is in the following canonical form: it has $1$’s in a permutation matrix shape, $0$’s to the left and below each $1$, and arbitrary complex numbers in all other entries.

For instance, say we start with the flag in three dimensions generated by the vectors $\langle 0,2,3\rangle$, $\langle 1, 1, 4\rangle$, and $\langle 1, 2, -3\rangle$. The corresponding matrix is $$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right).$$ We start by finding the leftmost nonzero element in the first row and scale that row so that this element is $\newcommand{\PP}{\mathbb{P}}
\newcommand{\CC}{\mathbb{C}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\ZZ}{\mathbb{Z}}
\DeclareMathOperator{\Gr}{Gr}
\DeclareMathOperator{\Fl}{F\ell}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\inv}{inv}1$. Then subtract multiples of this row from the rows below it so that all the entries below that $1$ are $0$. Continue the process on all further rows:

$$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right) \to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 1 & 0 & -6\end{array}\right)\to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 0 & 0 & 1\end{array}\right)$$
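This reduction is easy to implement with exact arithmetic (a Python sketch; `flag_canonical_form` is my own name for it, and this is just the process described above, not a standard library routine):

```python
from fractions import Fraction

def flag_canonical_form(rows):
    """Row-reduce a full-rank square matrix to the canonical form above:
    scale each row so its leftmost nonzero entry is 1, then subtract
    multiples of it from the rows below to clear the entries under that 1.
    The partial row spans (i.e., the flag) are unchanged throughout."""
    M = [[Fraction(x) for x in row] for row in rows]
    for i in range(len(M)):
        j = next(k for k, entry in enumerate(M[i]) if entry != 0)
        pivot = M[i][j]
        M[i] = [entry / pivot for entry in M[i]]
        for below in range(i + 1, len(M)):
            c = M[below][j]
            M[below] = [b - c * a for a, b in zip(M[i], M[below])]
    return M

for row in flag_canonical_form([[0, 2, 3], [1, 1, 4], [1, 2, -3]]):
    print([str(entry) for entry in row])
# ['0', '1', '3/2'], ['1', '0', '5/2'], ['0', '0', '1']
```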

It is easy to see that this process does not change the flag formed by the partial row spans, and that any two distinct matrices in canonical form define different flags. So, the flag variety is covered by $n!$ open sets given by choosing a permutation and forming the corresponding canonical form. For instance, one such open set in the $5$-dimensional flag variety is the open set given by all matrices of the form
$$\left(\begin{array}{ccccc}
0 & 1 & \ast & \ast & \ast \\
1 & 0 & \ast & \ast & \ast \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & \ast & 0 \\
0 & 0 & 0 & 1 & 0 \end{array}\right)$$

We call this open set $X_{45132}$ because it corresponds to the permutation matrix formed by placing a $1$ in the $4$th column from the right in the first row, in the $5$th from the right in the second row, and so on. The maximum number of $\ast$’s we can have in such a matrix is when the permutation is $w_0=n(n-1)\cdots 3 2 1$, in which case the dimension of the open set $X_{w_0}$ is $n(n-1)/2$ over $\CC$ — or $n(n-1)$ over $\RR$, since $\CC$ is two-dimensional over $\RR$. In general, the number of $\ast$’s in the set $X_w$ is the inversion number $\inv(w)$, the number of pairs of entries of $w$ which are out of order.
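The inversion number is simple to compute, and gives a quick way to check the star count above (a small Python sketch):

```python
def inv(w):
    """Number of inversions of a permutation: pairs i < j with w[i] > w[j]."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

print(inv((4, 5, 1, 3, 2)))  # 7, matching the seven *'s in X_{45132} above
print(inv((5, 4, 3, 2, 1)))  # 10 = binom(5, 2), the maximum for n = 5
```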

Finally, in order to paste these disjoint open sets together to form a smooth manifold, we consider the closures of the sets $X_w$ as a disjoint union of other $X_w$’s. The partial ordering in which $\overline{X_w}=\sqcup_{v\le w} X_v$ is called the Bruhat order, a famous partial ordering on permutations. (For a nice introduction to Bruhat order, one place to start is Yufei Zhao’s expository paper on the subject.)

Intersection Cohomology

Now suppose we wish to answer incidence questions about our flags: which flags satisfy certain given constraints? As in the case of the Grassmannians, this boils down to understanding how the Schubert cells $X_w$ intersect. This question is equivalent to studying the cohomology ring of the flag variety $\Fl_n$ over $\mathbb{Z}$, where we consider the Schubert cells as forming a cell complex structure on the flag variety.

The cohomology ring $H^\ast(\Fl_n)$, as it turns out, is the coinvariant ring that we discussed in the last post! For full details I will refer the interested reader to Fulton’s book on Young tableaux. To give the reader the general idea here, the Schubert cell $X_w$ can be thought of as a cohomology class in $H^{2i}(\Fl_n)$ where $i=\inv(w)$. We call this cohomology class $\sigma_w$, and note that for the transpositions $s_i$ formed by swapping $i$ and $i+1$, we have $\sigma_{s_i}\in H^2(\Fl_n)$. It turns out that setting $x_i=\sigma_{s_i}-\sigma_{s_{i-1}}$ for $i\le n-1$ (where we set $\sigma_{s_0}=0$) and $x_n=-\sigma_{s_{n-1}}$ gives a set of generators for the cohomology ring, and in fact $$H^\ast(\Fl_n)=\mathbb{Z}[x_1,\ldots,x_n]/(e_1,\ldots,e_n)$$ where $e_1,\ldots,e_n$ are the elementary symmetric polynomials in $x_1,\ldots,x_n$.

Digging deeper: The isotypic components

In last week’s post, we made use of the coinvariant ring $$\mathbb{C}[x_1,\ldots,x_n]/I$$ where $I=(p_1,\ldots,p_n)$ is the ideal generated by the positive-degree homogeneous $S_n$-invariants (symmetric polynomials). We saw that this was an $S_n$-module with Hilbert series $(n)_q!$, and claimed that it was the regular representation.

Let’s see why that is, and see if we can understand where the irreducible components occur.

More precisely, our goal is to understand the series $$\sum_{d} H_{\chi^\mu}(d)q^d$$ where $H_{\chi^\mu}(d)$ is the number of copies of the $\mu$th irreducible representation of $S_n$ occurring in the $d$th degree component of $\mathbb{C}[x_1,\ldots,x_n]/I$. In Stanley’s paper on invariants of finite groups, he states without proof the answer as the following “unpublished result of Lusztig”:

Let $G$ be the group of all $m \times m$ permutation matrices, and let $\chi$ be the irreducible character of $G$ corresponding to the partition $\mu$ of $m$. Then $H_{\chi}(n)$ is equal to the number of standard Young tableaux $T$ of shape $\mu$ such that $n$ is equal to the sum of those entries $i$ of $T$ for which $i$ appears in a column to the left of $i+1$.

To prove this, let’s start with the identity we showed last time, using boldface $\mathbf{x}$ to denote $x_1,\ldots,x_n$ as a shorthand:
$$\mathbb{C}[\mathbf{x}]=\Lambda_{\mathbb{C}}(\mathbf{x})\otimes_{\mathbb{C}}\mathbb{C}[\mathbf{x}]/I$$

Since $\Lambda_{\mathbb{C}}(\mathbf{x})$, the ring of symmetric functions, consists entirely of copies of the trivial representation of $S_n$, we see that the irreducible components of type $\chi^\mu$ in degree $d$ on the right hand side come from those of that type in $\mathbb{C}[\mathbf{x}]/I$ of degree $d-k$, tensored with the trivial representations in degree $k$ in $\Lambda$, for some $k$. Moreover, there are $p_n(d)$ copies of the trivial representation in the $d$th degree in $\Lambda$ for all $d$, where $p_n(d)$ is the number of partitions of $d$ into parts of size at most $n$. (One can use the elementary or power sum symmetric function bases to see this.) From this we obtain the following series identity:

$$\left(\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d\right)= \left(\sum p_n(d)q^d\right)\cdot \left(\sum H_{\chi^\mu}(d) q^d\right)$$

To simplify the left hand side, we can use the generalized version of Molien’s theorem for isotypic components (see here). This gives us
$$\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d=\frac{1}{n!}\sum_{\pi\in S_n}\frac{\overline{\chi^\mu}(\pi)}{\prod (1-q^{c_i(\pi)})}$$ where the $c_i(\pi)$’s are the cycle lengths of $\pi$.

(See this post for details on the above simplification in the case of the trivial character. The case of a general $\chi^\mu$ is analogous.)

If we group the permutations $\pi$ in the sum above according to cycle type (i.e. by conjugacy class), and use the fact that characters of $S_n$ are integers and hence $\overline{\chi^\mu}=\chi^\mu$, we have $$\sum \left\langle (\mathbb{C}[\mathbf{x}])_d,\chi^\mu \right\rangle q^d=\frac{1}{n!}\sum_{\lambda\vdash n} \frac{n!}{z_\lambda}\frac{\chi^\mu(\lambda)}{\prod_i (1-q^{\lambda_i})}.$$ Here $z_\lambda$ are the numbers such that $n!/z_\lambda$ is the size of the conjugacy class corresponding to the partition $\lambda$. It is not hard to see that this simplifies to a specialization of the power sum symmetric functions: $$\sum \frac{\chi^\mu(\lambda)}{z_\lambda} p_\lambda(1,q,q^2,\ldots)$$

Finally, by the representation-theoretic definition of the Schur functions, we see that this is simply $$s_\mu(1,q,q^2,\ldots).$$

Substituting for the left hand side of our original equation, we now have $$s_\mu(1,q,q^2,\ldots)=\left(\sum p_n(d) q^d\right)\cdot \left(\sum H_{\chi^\mu}(d) q^d\right).$$ We can simplify this further by using the series identity $$\sum p_n(d) q^d=\frac{1}{(1-q)(1-q^2)\cdots(1-q^n)}.$$ In addition, there is a well-known identity (see also Enumerative Combinatorics vol. 2, Proposition 7.19.11) $$s_\mu(1,q,q^2,\ldots)=\frac{\sum_T q^{\operatorname{maj} T}}{(1-q)(1-q^2)\cdots (1-q^n)}$$ where the sum ranges over all standard Young tableaux $T$ of shape $\mu$, and where $\operatorname{maj} T$ denotes the sum of all entries $i$ of $T$ that occur in a higher row than $i+1$ (written in English notation).

This does it: putting everything together and solving for $\sum H_{\chi^\mu}(d) q^d$, we obtain $$\sum H_{\chi^\mu}(d) q^d=\sum_{T}q^{\operatorname{maj} T},$$ which is just about equivalent to Lusztig’s claim. (The only difference is whether we are looking at the rows or the columns that $i$ and $i+1$ are in. There must have been a typo, because the two are not the same $q$-series for the shape $(3,1)$. Replacing “column to the left of” by “row above” or replacing $\mu$ by its conjugate would fix the theorem statement above.)
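The $(3,1)$ discrepancy mentioned above is easy to confirm by brute force (a Python sketch; the recursive tableau generator is one of many possible implementations):

```python
def syt(shape):
    """Generate the standard Young tableaux of a partition shape, each as a
    dict mapping entry -> (row, column), with row 0 the top row."""
    n = sum(shape)
    def build(filled, entry, placed):
        if entry > n:
            yield dict(placed)
            return
        for r, row_len in enumerate(shape):
            # Entry may extend row r if the row isn't full and the row
            # above is strictly longer, so columns stay increasing.
            if filled[r] < row_len and (r == 0 or filled[r] < filled[r - 1]):
                filled[r] += 1
                placed[entry] = (r, filled[r] - 1)
                yield from build(filled, entry + 1, placed)
                filled[r] -= 1
                del placed[entry]
    return build([0] * len(shape), 1, {})

def maj_rows(T, n):
    """Sum of entries i lying in a higher row than i + 1."""
    return sum(i for i in range(1, n) if T[i][0] < T[i + 1][0])

def maj_cols(T, n):
    """Lusztig's statement: sum of entries i in a column left of i + 1."""
    return sum(i for i in range(1, n) if T[i][1] < T[i + 1][1])

shape, n = (3, 1), 4
print(sorted(maj_rows(T, n) for T in syt(shape)))  # [1, 2, 3]
print(sorted(maj_cols(T, n) for T in syt(shape)))  # [3, 4, 5]: a different q-series
```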

One final consequence of the formulas above is that it is easy to deduce that the ring of coinvariants, $\mathbb{C}[\mathbf{x}]/I$, is isomorphic to the regular representation of $S_n$. Indeed, setting $q=1$ we see that the total number of copies of the irreducible representation corresponding to $\mu$ is equal to the number of standard Young tableaux of shape $\mu$, giving us the regular representation.

Acknowledgments: The above techniques were shown to me by Vic Reiner at a recent conference. Thanks also to Federico Castillo for many discussions about the ring of coinvariants.

Writing polynomials in terms of invariants and coinvariants

There is a fun little fact regarding polynomials in two variables $x$ and $y$:

Any two-variable polynomial $f(x,y)$ can be uniquely written as a sum of a symmetric polynomial and an antisymmetric polynomial.

(To be more precise, this is true for polynomials over any field of characteristic not equal to $2$. For simplicity, in what follows we will assume that our polynomials have coefficients in $\mathbb{C}$.)

Recall that a polynomial $g$ is symmetric if it does not change upon permuting its variables. In this case, with two variables, $g(x,y)=g(y,x)$. It is antisymmetric if swapping any two of the variables negates it, in this case $g(x,y)=-g(y,x)$.

It is not hard to prove the fact above. To show existence of the decomposition, set $g(x,y)=\frac{f(x,y)+f(y,x)}{2}$ and $h(x,y)=\frac{f(x,y)-f(y,x)}{2}$. Then $$f(x,y)=g(x,y)+h(x,y),$$ and $g$ is symmetric while $h$ is antisymmetric. For instance, if $f(x,y)=x^2$, then we can write $$x^2=\frac{x^2+y^2}{2}+\frac{x^2-y^2}{2}.$$

For uniqueness, suppose $f(x,y)=g_0(x,y)+h_0(x,y)$ where $g_0$ is symmetric and $h_0$ is antisymmetric. Then $g_0+h_0=g+h$, and so $$g_0-g=h-h_0.$$ The left hand side of this equation is symmetric and the right hand side is antisymmetric, and so both sides must be identically zero. This implies that $g_0=g$ and $h_0=h$, so the unique decomposition is $f=g+h$. QED.
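Here is this existence-and-uniqueness recipe as code (a Python sketch assuming sympy; `swap` and `decompose` are my own helper names):

```python
from sympy import symbols, expand, simplify

x, y = symbols('x y')

def swap(f):
    """f(y, x): exchange the two variables simultaneously."""
    return f.subs({x: y, y: x}, simultaneous=True)

def decompose(f):
    """Split f into its symmetric part g and antisymmetric part h."""
    return (f + swap(f)) / 2, (f - swap(f)) / 2

f = x**2 + 3*x*y**2
g, h = decompose(f)
assert simplify(g - swap(g)) == 0   # g is symmetric
assert simplify(h + swap(h)) == 0   # h is antisymmetric
assert expand(f - (g + h)) == 0     # and f = g + h
```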

This got me thinking…

Is there an analogous decomposition for polynomials in three variables? Or any number of variables?

The above decomposition doesn’t make sense in three variables, but perhaps every polynomial in $x$, $y$, and $z$ can be written uniquely as a sum of a symmetric, antisymmetric, and… some other particular type(s) of polynomials.

Indeed, it can be generalized in the following sense. Notice that any antisymmetric polynomial $h$ in two variables is divisible by $x-y$, since setting $x=y$ gives us $h(x,x)=-h(x,x)=0$. Moreover, dividing by $x-y$ gives a symmetric polynomial: if $$h(x,y)=p(x,y)\cdot(x-y)$$ is antisymmetric, then $p(x,y)\cdot (x-y)=-p(y,x)\cdot(y-x)$, and so $p(x,y)=p(y,x)$.

Thus any antisymmetric polynomial $h$ is equal to $x-y$ times a symmetric polynomial, and so we can restate our fact above in the following way:

Any two-variable polynomial $f(x,y)$ can be written uniquely as a linear combination of $1$ and $x-y$, using symmetric polynomials as the coefficients.

For instance, $f(x,y)=x^2$ can be written as $$x^2=\left(\frac{x^2+y^2}{2}\right)\cdot 1+\left(\frac{x+y}{2}\right)\cdot (x-y).$$
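And in code, extracting the symmetric coefficient of $x-y$ (again a sympy sketch):

```python
from sympy import symbols, expand, cancel

x, y = symbols('x y')

f = x**2
g = (f + f.subs({x: y, y: x}, simultaneous=True)) / 2  # symmetric part
h = (f - f.subs({x: y, y: x}, simultaneous=True)) / 2  # antisymmetric part
p = cancel(h / (x - y))  # the antisymmetric part divided by (x - y) is symmetric

print(expand(g))  # x**2/2 + y**2/2
print(expand(p))  # x/2 + y/2
assert expand(g + p * (x - y) - f) == 0  # the displayed identity above
```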

Now, to generalize this to three variables, in place of $x-y$, we consider the polynomial $$\Delta=(x-y)(x-z)(y-z).$$ Also consider the five polynomials:

  • $x^2-z^2+2yz-2xy$,
  • $z^2-y^2+2xy-2xz$,
  • $x-y$,
  • $y-z$, and
  • $1$,

each of which is obtained by taking certain partial derivatives starting with $\Delta$. It turns out that every polynomial in three variables can be decomposed uniquely as a linear combination of these six polynomials ($\Delta$ together with the five above), using symmetric polynomials as the coefficients!

Where do these six polynomials come from? Turn to the next page to find out…