What do Schubert curves, Young tableaux, and K-theory have in common? (Part I)

In a recent and fantastic collaboration between Jake Levinson and myself, we discovered new links between several different geometric and combinatorial constructions. We’ve woven them together into a beautiful mathematical story, one filled with drama and intrigue.

So let’s start in the middle.

Slider puzzles for mathematicians

Those of you who played with little puzzle toys growing up may remember the “15 puzzle”, a $4\times 4$ grid holding 15 sliding square tiles and one empty square. A move consists of sliding an adjacent tile into the empty square. The French name for this game is “jeu de taquin”, which translates to “the teasing game”.

We can play a similar jeu de taquin game with semistandard Young tableaux. To set up the board, we need a slightly more general definition: a skew shape $\lambda/\mu$ is a diagram of squares formed by subtracting the Young diagram (see this post) of a partition $\mu$ from the (strictly larger) Young diagram of a partition $\lambda$. For instance, if $\lambda=(5,3,3,1)$ and $\mu=(2,1)$, then the skew shape $\lambda/\mu$ consists of the white squares shown below.

[Figure: the skew shape $\lambda/\mu$ for $\lambda=(5,3,3,1)$ and $\mu=(2,1)$]

A semistandard Young tableau is then a way of filling the squares in such a skew shape with positive integers in such a way that the entries are weakly increasing across rows and strictly increasing down columns:

[Figure: a semistandard Young tableau of skew shape]

Now, an inner jeu de taquin slide consists of choosing an empty square adjacent to entries of the tableau on its east and south sides (an inner corner), and successively sliding entries inward into the empty square in such a way that the tableau remains semistandard at each step. This rule is important: it implies that, once we choose our inner corner, there is a unique choice at each step between the squares east and south of the empty square, since only one of them can be slid while preserving the semistandard property. An example of an inward jeu de taquin slide is shown (on repeat) in the animation below:

[Animation: an inward jeu de taquin slide, on repeat]

Here’s the game: perform a sequence of successive jeu de taquin slides until there are no empty inner corners left. What tableaux can you end up with? It turns out that, in fact, it doesn’t matter how you play this game! No matter which inner corner you pick to start the jeu de taquin slide at each step, you will end up with the same tableau in the end, called the rectification of the original tableau.

[Figure: sequences of jeu de taquin slides rectifying a skew tableau]

Since we always end up at the same result, it is sometimes more interesting to ask the question in reverse: can we categorize all skew tableaux that rectify to a given fixed tableau? This question has a nice answer in the case that we fix the rectification to be superstandard, that is, the tableau whose $i$th row is filled with all $i$’s:

[Figure: a superstandard tableau]

It turns out that a semistandard tableau rectifies to a superstandard tableau if and only if it is Littlewood-Richardson, defined as follows. Read the rows from bottom to top, and left to right within a row, to form the reading word. Then the tableau is Littlewood-Richardson if every suffix (i.e. consecutive subword that reaches the end) of the reading word is ballot, which means that it has at least as many $i$’s as $i+1$’s for each $i\ge 1$.

For instance, the Littlewood-Richardson tableau below has reading word $352344123322111$; its suffix $123322111$ has at least as many $1$’s as $2$’s, at least as many $2$’s as $3$’s, and so on. Littlewood-Richardson tableaux are key to the Littlewood-Richardson rule, which allows us to efficiently compute products of Schur functions.

[Figure: a Littlewood-Richardson tableau with reading word $352344123322111$]

A convoluted commutator

The operation that Jake and I studied is a sort of commutator of rectification with another operation called “shuffling”. The process is as follows. Start with a Littlewood-Richardson tableau $T$, with one of the corners adjacent to $T$ on the inside marked with an “$\times$”. Call this extra square the “special box”. Then we define $\omega(T)$ to be the result of the following four operations applied to $T$.

  1. Rectification: Treat $\times$ as having value $0$ and rectify the entire skew tableau.
  2. Shuffling: Treat the $\times$ as the empty square to perform an inward jeu de taquin slide. The resulting empty square on the outer corner is the new location of $\times$.
  3. Un-rectification: Treat $\times$ as having value $\infty$ and un-rectify, using the sequence of moves from the rectification step in reverse.
  4. Shuffling back: Treat the $\times$ as the empty square to perform a reverse jeu de taquin slide, to move the $\times$ back to an inner corner.

[Figure: the four steps of $\omega$ applied to a tableau $T$]

We can iterate $\omega$ to get a permutation on all pairs $(\times,T)$ of a Littlewood-Richardson tableau $T$ with a special box marked on a chosen inner corner, with total shape $\lambda/\mu$ for some fixed $\lambda$ and $\mu$. As we’ll discuss in the next post, this permutation is related to the monodromy of a certain covering space of $\mathbb{RP}^1$ arising from the study of Schubert curves. But I digress.

One of the main results in our paper provides a new, more efficient algorithm for computing $\omega(T)$. In particular, the first three steps of the algorithm are what we call the “evacuation-shuffle”, and our local rule for evacuation shuffling is as follows:

  1. Phase 1. Start with $i=1$. If the special box does not precede all of the $i$’s in reading order, switch the special box with the nearest $i$ prior to it in reading order. Then increment $i$ by $1$ and repeat this step.

    If, instead, the special box precedes all of the $i$’s in reading order, go to Phase 2.

  2. Phase 2. If the suffix of the reading word starting at the special box has more $i$’s than $i+1$’s, switch the special box with the nearest $i$ after it in reading order whose suffix has the same number of $i$’s as $i+1$’s. Either way, increment $i$ by $1$ and repeat this step until $i$ is larger than any entry of $T$.

So, to get $\omega(T)$, we first follow the Phase 1 and Phase 2 steps, and then we slide the special box back with a simple jeu de taquin slide. We can then iterate $\omega$, and compute an entire $\omega$-orbit, a cycle of its permutation. An example of this is shown below.

[Figure: an example of the Phase 1 and Phase 2 moves]

That’s all for now! In the next post I’ll discuss the beginning of the story: where this operator $\omega$ arises in geometry and why this algorithm is exactly what we need to understand it.

What is a $q$-analog? (Part 2)

This is a continuation of Part 1 of this series of posts on $q$-analogs.

Counting by $q$’s

Another important area in which $q$-analogs come up is combinatorics. In this context, $q$ is a formal variable, and the $q$-analog is a generating function in $q$, but viewed in a different light than usual generating functions. We think of the $q$-analog as “$q$-counting” a set of weighted objects, where the weights are given by powers of $q$.

Say you’re trying to count permutations of $1,\ldots,n$, that is, ways of rearranging the numbers $1,\ldots,n$ in a row. There are $n$ ways to choose the first number, and once we choose that there are $n-1$ remaining choices for the second, then $n-2$ for the third, and so on. So there are $n!=n\cdot (n-1)\cdot \cdots \cdot 2\cdot 1$ ways to rearrange the entries. For instance, the $3!=6$ permutations of $1,2,3$ are $123$, $132$, $213$, $231$, $312$, $321$.

Now, say we weight the permutations according to how “mixed up” they are, in the sense of how many pairs of numbers are out of order. An inversion is a pair of entries in the permutation in which the bigger number is to the left of the smaller, and $\mathrm{inv}(p)$ denotes the number of inversions of the permutation $p$. The table below shows the permutations of $1,2,3$ along with the number of inversions they contain.

$$\begin{array}{ccc}
p & \mathrm{inv}(p) & q^{\mathrm{inv}(p)}\\\hline
123 & 0 & 1 \\
132 & 1 & q\\
213 & 1 & q\\
231 & 2 & q^2\\
312 & 2 & q^2 \\
321 & 3 & q^3
\end{array}
$$

We weight each permutation $p$ by $q^{\mathrm{inv}(p)}$, and $q$-count by summing these $q$-powers, to form the sum $$\sum_{p\in S_n}q^{\mathrm{inv}(p)}$$ where $S_n$ is the set of all permutations of $1,\ldots,n$. So for $n=3$, the sum is $1+2q+2q^2+q^3$ by our table above.

We now come to an important philosophical distinction between $q$-analogs and generating functions. As a generating function, the sum $1+2q+2q^2+q^3$ is thought of in terms of its sequence of coefficients, $1,2,2,1$. Generatingfunctionologically, we might instead write the sum as $\sum_{i=0}^\infty c_i q^i$ where $c_i$ is the number of permutations of length $n$ with $i$ inversions. But in $q$-analog notation, $\sum_{p\in S_n}q^{\mathrm{inv}(p)}$, we understand that it is not the coefficients but rather the exponents of our summation that we are interested in.

In general, a combinatorial $q$-analog can be defined as a summation of $q$-powers $q^{\mathrm{stat}(p)}$ where $p$ ranges over a certain set of combinatorial objects and $\mathrm{stat}$ is a statistic on these objects. Recall that we defined an “interesting $q$-analog” of an expression $P$ to be an expression $P_q$ such that

  1. Setting $q=1$ or taking the limit as $q\to 1$ results in $P$,
  2. $P_q$ can be expressed in terms of (possibly infinite) sums or products of rational functions of $q$ over some field,
  3. $P_q$ gives us more refined information about something that $P$ describes, and
  4. $P_q$ has $P$-like properties.

Certainly setting $q=1$ in a combinatorial $q$-analog results in the total number of objects, and the $q$-analog gives us more information about the objects than just their total number. It’s also a polynomial in $q$, so it satisfies properties 1, 2, and 3 above. Let’s now see how our $q$-analog $\sum_{p\in S_n}q^{\mathrm{inv}(p)}$, which is a $q$-analog of $n!$, also satisfies property 4.


Notice that $1+2q+2q^2+q^3$ factors as $(1)(1+q)(1+q+q^2)$. Indeed, in general this turns out to be the same $q$-factorial we saw in the last post! That is, $$\sum_{p\in S_n}q^{\mathrm{inv}(p)}=(1)(1+q)(1+q+q^2)\cdots(1+q+\cdots+q^{n-1})=(n)_q!.$$ So it satisfies property 4 by exhibiting a product formula like $n!$ itself. I posted a proof of this fact in this post, but let’s instead prove it by building up the theory of $q$-counting from the ground up.

The multiplication principle in combinatorics is the basic fact that the number of ways of choosing one thing from a set of $m$ things and another from a set of $n$ things is the product $m\cdot n$. But what if the things are weighted?

$q$-Multiplication Principle: Given two weighted sets $A$ and $B$ with $q$-counts $M(q)$ and $N(q)$, the $q$-count of the ways of choosing one element from $A$ and another from $B$ is the product $M(q)N(q)$, where the weight of a pair is the sum of the weights of the elements.

Let’s see how this plays out in the case of $(n)_q!$. If each entry in the permutation is weighted by the number of inversions it forms with smaller entries (to its right), then the first entry can be any of $1,2,\ldots,n$, which contributes a factor of $1+q+q^2+\cdots+q^{n-1}$ to the total. The next entry then can be any of the $n-1$ remaining entries, and since the first entry cannot be the smaller entry of an inversion with the second, this choice contributes a factor of $1+q+q^2+\cdots+q^{n-2}$ to the total by the same argument. Continuing in this fashion we get the $q$-factorial as our $q$-count.

Notice that even the proof was a $q$-analog of the proof that $n!$ is the number of permutations of $1,\ldots,n$, now that we have the $q$-Multiplication Principle.

That’s all for now! In the next post we’ll talk about how to use the $q$-Multiplication Principle to derive a combinatorial interpretation of the $q$-binomial coefficient, and discuss $q$-Catalan numbers and other fun $q$-analogs. Stay tuned!

What is a $q$-analog? (Part 1)

Hi, I’m Maria and I’m a $q$-analog addict. The theory of $q$-analogs is a little-known gem, and in this series of posts I’ll explain why they’re so awesome and addictive!

So what is a $q$-analog? It is one of those rare mathematical terms whose definition doesn’t really capture what it is about, but let’s start with the definition anyway:

Definition: A $q$-analog of a statement or expression $P$ is a statement or expression $P_q$, depending on $q$, such that setting $q=1$ in $P_q$ results in $P$.

So, for instance, $2q+3q^2$ is a $q$-analog of $5$, because if we plug in $q=1$ we get $5$.

Sometimes, if $P_q$ is not defined at $q=1$, we also say it’s a $q$-analog if $P$ can be recovered by taking the limit as $q$ approaches $1$. For instance, the expression $$\frac{q^5-1}{q-1}$$ is another $q$-analog of $5$ – even though we get division by zero if we plug in $q=1$, we do have a well defined limit that we can calculate, for instance using L’Hospital’s Rule: $$\lim_{q\to 1} \frac{q^5-1}{q-1}=\lim_{q\to 1} \frac{5q^4}{1} = 5.$$

Now of course, there is an unlimited supply of $q$-analogs of the number $5$, but certain $q$-analogs are more important than others. When mathematicians talk about $q$-analogs, they are usually referring to “good” or “useful” $q$-analogs, a notion which doesn’t have a widely accepted standard definition, but which I’ll attempt to define here:

More Accurate Definition: An interesting $q$-analog of a statement or expression $P$ is a statement or expression $P_q$ depending on $q$ such that:

  1. Setting $q=1$ or taking the limit as $q\to 1$ results in $P$,
  2. $P_q$ can be expressed in terms of (possibly infinite) sums or products of rational functions of $q$ over some field,
  3. $P_q$ gives us more refined information about something that $P$ describes, and
  4. $P_q$ has $P$-like properties.

Because of Property 2, most people would agree that $5^q$ is not an interesting $q$-analog of $5$, because usually we’re looking for polynomial-like things in $q$.


On the other hand, $\frac{q^5-1}{q-1}$ is an excellent $q$-analog of $5$ for a number of reasons. It certainly satisfies Property 2. It can also be easily generalized to give a $q$-analog of any real number: we can define $$(a)_q=\frac{q^a-1}{q-1},$$ a $q$-analog of the number $a$.

In addition, for positive integers $n$, the expression simplifies:
$$(n)_q=\frac{q^n-1}{q-1}=1+q+q^2+\cdots+q^{n-1}.$$
So for instance, $(5)_q=1+q+q^2+q^3+q^4$, which is a natural $q$-analog of the basic fact that $5=1+1+1+1+1$. The powers of $q$ are just distinguishing each of our “counts” as we count to $5$. This polynomial also captures the fact that $5$ is prime, in a $q$-analog-y way: the polynomial $1+q+q^2+q^3+q^4$ cannot be factored into two smaller-degree polynomials with integer coefficients. So the $q$-number $(5)_q$ also satisfies Properties 3 and 4 above: it gives us more refined information about $5$-ness, by keeping track of the way we count to $5$, and behaves like $5$ in the sense that it can’t be factored into smaller $q$-analogs of integers.

But it doesn’t stop there. Properties 3 and 4 can be satisfied in all sorts of ways, and this $q$-number is even more interesting than we might expect. It comes up in finite geometry, analytic number theory, representation theory, and combinatorics. So much awesome mathematics is involved in the study of $q$-analogs that I’ll only cover one aspect of it today: $q$-analogs that appear in geometry over a finite field $\mathbb{F}_q$. Turn to the next page to see them!

Glencoe/McGraw-Hill doesn’t believe this bijection exists

Education is a difficult task, it really is. Teaching takes a few tries to get the hang of. Writing textbooks is even harder. And math is one of those technical fields in which human error is hard to avoid. So usually, when I see a mistake in a math text, it doesn’t bother me much.

But some things just hurt my soul.

[Image: Example 2 from the Glencoe/McGraw-Hill worksheet, claiming there is no one-to-one and onto correspondence between the integers and the rationals]

No correspondence between the integers and rationals? Yes there is, Example 2! Yes there is!

This horrifying falsehood was stated in the supplementary “Study Guide and Intervention” worksheet for the Glencoe/McGraw-Hill Algebra 2 textbook, and recently pointed out on Reddit. Or at least it was stated in some version of this worksheet. The original file can be found online at various websites, including one download link from Glencoe’s website that shows up on a Google search. There are other versions of the document that don’t contain this example, but this version was almost certainly used in some high schools, as the Reddit thread claims.

Luckily, mathematicians are here to set the record straight. The Wolfram blog published a fantastic post about this error already, with several proofs of the countability of the rationals. There are also several excellent older expositions on this topic, including on the Math Less Traveled and the Division by Zero blogs. I’ll discuss two of my favorites here as well.

But first, let’s talk about what is wrong with the argument in Example 2. The author is correct in stating that listing all the rationals in order would give a one-to-one and onto correspondence between the rationals and integers, and so they tried to do so in a haphazard way and failed. At that point, instead of trying a different ordering, they gave up and figured it couldn’t be done! That’s not a proof, nor even logically sound (as my students at this year’s Prove it! Math Academy would certainly recognize).

If one were going to try to prove that a certain set couldn’t be organized into a list, a common tactic would be to use proof by contradiction: assume there was a way to list them, and then show that something goes wrong and you get a contradiction. Of course, this wouldn’t work either in the case of the rationals, because they can be listed.

So let’s discuss a correct solution.

Getting our definitions straight

First, let’s state the precise meaning of a one-to-one and onto correspondence. A function $f$ from a set $A$ to a set $B$, written $f:A\to B$, is an assignment of each element $a\in A$ to an element $f(a)\in B$. To clear up another misuse of notation in the Glencoe Algebra textbook, the set $A$ is called the domain and $B$ is called the codomain (not the range, as Glencoe would have you think – the range refers to the set of elements of $B$ that are assigned to by the function). A function is:

  • One-to-one, or injective, if no two elements of $A$ are assigned to the same element of $B$, i.e., if $f(x)=f(y)$ then $x=y$.
  • Onto, or surjective, if every element of $B$ is mapped to, i.e., for all $b\in B$, there exists $a\in A$ such that $f(a)=b$.

For instance, if $\mathbb{Z}$ denotes the set of integers, the function $f:\mathbb{Z}\to \mathbb{Z}$ defined by $f(x)=2x$ is injective, since if $2x=2y$ then $x=y$. However, it is not surjective, since an odd number like $3$ is not equal to $2x$ for any integer $x$.

A function which is both injective and surjective is said to be bijective, and is called a bijection. This is just a shorter way of saying “one-to-one and onto correspondence,” which is wordy and cumbersome.

So, we want to find a bijection $f:\mathbb{Z}\to \mathbb{Q}$, where $\mathbb{Z}$ denotes the integers and $\mathbb{Q}$ the rationals. Notice that we can list all the integers in order: $$0,1,-1,2,-2,3,-3,\ldots$$
and so if we list all the rationals in order, $r_0,r_1,r_2,\ldots$, we can define the function $f$ accordingly by $f(0)=r_0$, $f(1)=r_1$, $f(-1)=r_2$, and so on. The function will be bijective if and only if every rational number appears in the list exactly once.

Next, let’s be precise about the rationals. Recall that the rational numbers are those numbers which can be written as fractions $a/b$ where $a$ and $b$ are integers with $b\neq 0$. In order to assign every rational number a unique representation, let us restrict to the case where $b>0$ and $a$ is any integer such that $\mathrm{gcd}(a,b)=1$. This condition makes $a/b$ into a reduced fraction. So the number $2/-4$ should be written as $-1/2$ in this convention. It follows that we can think of the set of rational numbers as the set $$\mathbb{Q}=\{(a,b) | b>0\text{ and }\mathrm{gcd}(a,b)=1\text{ and }a,b\in \mathbb{Z}\}.$$

Listing the rationals, naively

One way to construct this list is to think of the rationals as ordered pairs $(a,b)$ of integers with $b>0$ and $\mathrm{gcd}(a,b)=1$ as described above. There is an easy way of ordering all pairs of integers – plot them on a coordinate plane, and use a spiral!

[Figure: a square spiral through the lattice of integer points $(a,b)$, with the points satisfying $b>0$ and $\gcd(a,b)=1$ marked in green]

Now, to list the rationals, follow the spiral from $(0,0)$ outwards. Each time we reach an ordered pair of integers, say $(a,b)$, write it down if $b>0$ and $\mathrm{gcd}(a,b)=1$, and otherwise skip it and move on. (These are the green dots above.) This process guarantees that we list all the rationals exactly once.

More elegant methods

There are many other elegant enumerations of the rationals, and one particularly nice one is due to Calkin and Wilf. They construct a binary tree in which each rational number appears exactly once, as shown below.

[Figure: the Calkin-Wilf tree]

The tree is constructed as follows: start with $1/1$ at the top, and for each node $a/b$ in the tree, assign it the left and right children $a/(a+b)$ and $(a+b)/b$ respectively. This tree turns out to contain every positive rational number exactly once. It also has an incredible property: if we read the entries of each row of the tree successively from left to right, the denominator of each entry will match the numerator of the next entry, giving us a sequence of numerators/denominators:
$$1,1,2,1,3,2,3,1,4,3,5,2,5,3,4,1,\ldots$$ such that the consecutive quotients give us a list of the positive rationals.

This makes me wonder whether there is a way of listing the integers such that (a) every integer occurs exactly once in the sequence, and (b) the consecutive quotients give all rationals exactly once (allowing the consecutive integers to have common factors greater than one). Thoughts?

The r-major index

In this post and this one too, we discussed the inv and maj statistics on words, and two different proofs of their equidistribution. In fact, there is an even more unifying picture behind these statistics: they are simply two different instances of an entire parameterized family of statistics, called $r\text{-maj}$, all of which are equidistributed!

Rawlings defined an $r$-inversion of a permutation $\pi$ to be a pair of entries $(\pi_i,\pi_j)$ with $i\lt j$ and $$0\lt \pi_i-\pi_j\lt r.$$ For instance, $21534$ has three total inversions, $(2,1)$, $(5,4)$, and $(5,3)$, but only the first two have $\pi_i-\pi_j<2$, so it has two $2$-inversions. He also defined an $r$-descent to be an index $i$ for which $$\pi_i\ge \pi_{i+1}+r,$$ so that $21534$ has only position $3$ as a $2$-descent.

Finally, he defined the $r\text{-maj}$ of a permutation to be $$r\text{-}\mathrm{maj}(\pi)=\left(\sum_{\pi_i\ge \pi_{i+1}+r}i\right)+\#r\text{-}\mathrm{inversions}.$$ Thus $2\text{-}\mathrm{maj}(21534)=3+2=5$. Notice that $1\text{-maj}$ is the usual major index, and $n\text{-maj}$ is the inv statistic!

Rawlings’ result is that these statistics all have the same distribution: for any $r,s\ge 1$, the number of permutations of $\{1,2,\ldots,n\}$ having $r\text{-maj}$ value $k$ is the same as the number of them having $s\text{-maj}$ value $k$ for any $k$. More succinctly, $$\sum_{\pi\in S_n} q^{r\text{-maj}(\pi)}=\sum_{\pi\in S_n} q^{s\text{-maj}(\pi)}.$$

A colleague of mine mentioned this result and challenged me to prove it without reading the proof first, so here goes. I challenge the reader to do the same before turning to the next page. Good luck!

Deriving the discriminant of a cubic polynomial through analytic geometric means

This is a contributed gemstone, written by Sushant Vijayan. Enjoy!

Consider a generic cubic equation $$at^3+bt^2+ct+d=0,$$ where $a,b,c,d$ are real numbers and $a \neq 0$. Now we can transform this equation into a depressed cubic equation, i.e., one with no $t^2$ term, by means of the Tschirnhaus transformation $t=x-\frac{b}{3a}$, followed by dividing through by $a$. The depressed cubic equation is given by $$x^3+px+q=0$$ where $p$ and $q$ are related to $a,b,c,d$ by the relation given here, namely $p=\frac{3ac-b^2}{3a^2}$ and $q=\frac{2b^3-9abc+27a^2d}{27a^3}$. Setting $p=-m$ and $q=-n$ and rearranging, we arrive at $$x^3 =mx+n \hspace{3cm} (1)$$ We will investigate the nature of the roots of this equation. We begin by plotting the graph of $y=x^3$:

[Figure: the graph of $y=x^3$]

It is an odd, strictly increasing function with an inflection point at $x=0$. The real roots of equation (1) are the $x$-coordinates of the points of intersection between the straight line $y= mx+n$ and the curve $y=x^3$. It is clear geometrically that however we draw the straight line, as a bare minimum, there would be at least one point of intersection. This corresponds to the fact that all cubic equations with real coefficients have at least one real root.

It is immediately clear that if the slope $m$ of the line is zero or negative, there is only one point of intersection and hence only one real root. On the other hand, the condition $m>0$ is equivalent to demanding that the depressed cubic $y(x)= x^3-mx-n$ has two points of local extrema (which is a necessary, but not sufficient, condition for the existence of three real roots).

Now the possibility of repeated real roots occur when the straight line is a tangent to the curve $y=x^3$. Hence consider the slope of the function $y=x^3$:
$$\frac{dy}{dx}=3x^2$$ Now equating the slopes (note $m\ge 0$) we get the tangent points:
$$3x^2=m$$
$$x=\pm \sqrt{\frac {m}{3} } $$

Equivalently, the tangents are at the two values of $x$ for which
$$|x|=\sqrt{\frac {m}{3} }.$$

The corresponding $y$-intercepts for these tangent straight lines are:
$$n=\pm \frac{2m}{3}\sqrt{\frac{m}{3}}$$
or, in other words,
$$|n|=\frac{2m}{3}\sqrt{\frac{m}{3}}$$

[Figure: the two tangent lines of slope $m$ to the curve $y=x^3$]

Thus for a straight line with a given slope $m\ge 0$ there are only two tangent lines, with the corresponding tangent points and $y$-intercepts computed above. This is the case in which equation (1) has a repeated real root (at the point of tangency) along with one other real root.

What about the case where the slope is still $m$ but $|n|<\frac{2m}{3}\sqrt{\frac{m}{3}}$?

[Figure: a line of slope $m$ lying between the two tangent lines]

In this case the straight line is parallel to the two tangent lines (since it has the same slope) and lies in the region bounded by them. Hence it necessarily intersects the curve $y=x^3$ at three points. This corresponds to the situation of equation (1) having three distinct real roots.

And the case where $|n| > \frac{2m}{3}\sqrt{\frac{m}{3}}$ corresponds to a line lying outside the region bounded by the two tangent lines; such a line has only one point of intersection with the curve.

[Figure: a line of slope $m$ lying outside the region bounded by the two tangent lines]

Hence the necessary and sufficient condition for three real roots (including repeated roots) is given by:
$$|n| \le \frac{2m}{3}\sqrt{\frac{m}{3}} \hspace{3cm}(2) $$

We note that the condition $m\ge 0$ is subsumed within the above condition (2), for if $m<0$ then condition (2) cannot be satisfied. Condition (2) is a radical expression; to remove the radical, we square both sides and rearrange to arrive at $$ \frac{4m^3}{27}-n^2 \ge 0.$$ Multiplying by $27$ and setting $\bigtriangleup=4m^3-27n^2$, we get $$ \bigtriangleup \ge 0. $$ The quantity $\bigtriangleup$ is the discriminant of the cubic and carries all the information required to determine the nature of the roots of the depressed cubic equation (1). It may then be written in terms of $a,b,c,d$ by inverting the Tschirnhaus transformation.

A similar exercise may be carried out for the quartic equation, and a similar, albeit more complicated, expression can be derived for the discriminant. It would be very interesting to do the same exercise for the quintic and see where it fails (fail it must, for otherwise it would contradict Abel's famous result on the insolvability of the quintic by radicals).

The Springer Correspondence, Part III: Hall-Littlewood Polynomials

This is the third post in a series on the Springer correspondence. See Part I and Part II for background.

In this post, we’ll restrict ourselves to the type A setting, in which $\DeclareMathOperator{\GL}{GL}\DeclareMathOperator{\inv}{inv} G=\GL_n(\mathbb{C})$, the Borel $B$ is the subgroup of invertible upper triangular matrices, and $U\subset G$ is the unipotent subvariety. In this setting, the flag variety is isomorphic to $G/B$ or $\mathcal{B}$ where $\mathcal{B}$ is the set of all subgroups conjugate to $B$.

The Hall-Littlewood polynomials

For a given partition $\mu$, the Springer fiber $\mathcal{B}_\mu$ can be thought of as the set of all flags $F$ which are fixed by left multiplication by a unipotent element $u$ of Jordan type $\mu$. In other words, it is the set of complete flags $$F:0=F_0\subset F_1 \subset F_2 \subset \cdots \subset F_n=\mathbb{C}^n$$ where $\dim F_i=i$ and $uF_i=F_i$ for all $i$.

In the last post we saw that there is an action of the Weyl group, in this case the symmetric group $S_n$, on the cohomology rings $H^\ast(\mathcal{B}_\mu)$ of the Springer fibers. We let $R_\mu=H^\ast(\mathcal{B}_\mu)$ denote this ring, and we note that its graded Frobenius characteristic $$\DeclareMathOperator{\Frob}{Frob}\widetilde{H}_\mu(X;t):=\Frob_t(H^\ast(\mathcal{B}_\mu))=\sum_{d\ge 0}t^d \Frob(H^{2d}(\mathcal{B}_\mu))$$ encodes all of the data determining this ring as a graded $S_n$-module. The symmetric functions $\widetilde{H}_\mu(X;t)\in \Lambda_{\mathbb{Q}(t)}(x_1,x_2,\ldots)$ are called the Hall-Littlewood polynomials.

The first thing we might ask about a Hall-Littlewood polynomial $H_\mu$ is: what is its degree as a polynomial in $t$? In other words…

What is the dimension of $\mathcal{B}_\mu$?

The dimension of $\mathcal{B}_\mu$ will tell us the highest possible degree of its cohomology ring, giving us at least an upper bound on the degree of $H_\mu$. To compute the dimension, we will decompose $\mathcal{B}_\mu$ into a disjoint union of subvarieties whose dimensions are easier to compute.

Let’s start with a simple example. If $\mu=(1,1,1,\ldots,1)$ is a single-column shape of size $n$, then $\mathcal{B}_\mu$ is the full flag variety $\mathcal{B}$, since here the unipotent element $1$ is in the conjugacy class of shape $\mu$, and we can interpret $\mathcal{B}_\mu$ as the set of flags fixed by the identity matrix (all flags). As described in Part I, the flag variety can be decomposed into Schubert cells $X_w$ where $w$ ranges over all permutations in $S_n$ and $\dim(X_w)=\inv(w)$. For instance, $X_{45132}$ is the set of flags defined by the initial row spans of a matrix of the form:
$$\left(\begin{array}{ccccc}
0 & 1 & \ast & \ast & \ast \\
1 & 0 & \ast & \ast & \ast \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & \ast & 0 \\
0 & 0 & 0 & 1 & 0 \end{array}\right)$$
because this matrix has its leftmost $1$’s in positions $4,5,1,3,2$ from the right in that order.

Thus the dimension of the flag variety is the maximum of the dimensions of these cells. The number of inversions in $w$ is maximized when $w=w_0=n(n-1)\cdots 2 1$, and so $$\dim(\mathcal{B})=\inv(w_0)=\binom{n}{2}.$$

We claim that in general, $\dim(\mathcal{B}_\mu)=n(\mu)$ where if $\mu^\ast$ denotes the conjugate partition, $n(\mu)=\sum \binom{\mu^\ast_i}{2}$. Another way of defining $n(\mu)$ is as the sum of the entries of the superstandard tableau formed by filling the bottom row of $\mu$ with $0$’s, the next row with $1$’s, and so on:

[Figure: the filling of $\mu$ with $0$’s in the bottom row, $1$’s in the next row, and so on]

To show this, notice that $\mathcal{B}_\mu$ is a subvariety of the full flag variety $\mathcal{B}$, and so $$\mathcal{B}_\mu=\mathcal{B}_\mu\cap \mathcal{B}=\bigsqcup_w \mathcal{B}_\mu\cap X_{w}.$$ It thus suffices to find the largest possible dimension of the varieties $\mathcal{B}_\mu\cap X_{w}$.

Let $u$ be the standard unipotent element of Jordan type $\mu$. For instance, the matrix below is the standard unipotent matrix of shape $(3,2,2)$.
[Figure: the standard unipotent matrix of shape $(3,2,2)$]
Then the set $\mathcal{B}_\mu\cap X_{w}$ can be defined as the subset of $n\times n$ matrices defining flags in $X_w$ whose partial row spans are fixed by the action of $u$. Note that since the first vector is fixed by $u$, it must be equal to a linear combination of the unit vectors $e_{\mu_1}, e_{\mu_1+\mu_2},\ldots$. So we instantly see that the dimension of $\mathcal{B}_\mu\cap X_w$ is in general less than that of $X_w$.

Now, consider the permutation $$\hat{w}=n,n-\mu_1,n-\mu_1-\mu_2,\ldots,n-1,n-1-\mu_1,n-1-\mu_1-\mu_2,\ldots,\ldots.$$ Then it is not hard to see that the matrices in $X_{\hat{w}}$ whose flags are fixed by $u$ are those with $1$’s in positions according to $\hat{w}$, and with $0$’s in all other positions besides those in row $i$ from the top and column $k-\mu_1-\cdots-\mu_j$ from the right for some $i,j,k$ satisfying $i\le j$ and $\mu^\ast_1+\cdots+\mu^\ast_k< i \le \mu^\ast_1+\cdots+\mu^\ast_{k+1}$. This is a mouthful which is probably better expressed via an example. If $\mu=(3,2,2)$ as above, then $\hat{w}=7426315$, and $\mathcal{B}_\mu\cap X_{7426315}$ is the set of flags determined by the rows of matrices of the form $$\left(\begin{array}{ccccccc} 1 & 0 & 0 & \ast & 0 & \ast & 0 \\ 0 & 0 & 0 & 1 & 0 & \ast & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & \ast & 0 & \ast \\ 0 & 0 & 0 & 0 & 1 & 0 & \ast \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ \end{array}\right)$$

The first three rows above correspond to the first column of $\mu$, the next three rows to the second column, and the final row to the last column. Notice that the stars in each such block of rows form a triangular pattern similar to that for $X_{w_0}$, and therefore there are $n(\mu)=\binom{\mu^\ast_1}{2}+\binom{\mu^\ast_2}{2}+\cdots$ stars in the diagram. Thus $\mathcal{B}_\mu\cap X_{\hat{w}}$ is an open affine set of dimension $n(\mu)$, and so $\mathcal{B}_\mu$ has dimension at least $n(\mu)$.

A bit more fiddling with linear algebra and multiplication by $u$ (try it!) shows that, in fact, for any permutation $w$, the row with a $1$ in the $i$th position contributes at most as many stars in $\mathcal{B}_\mu\cap X_{w}$ as it does in $\mathcal{B}_\mu\cap X_{\hat{w}}$. In other words, all other components $\mathcal{B}_\mu\cap X_{w}$ have dimension at most $n(\mu)$, and so $$\dim\mathcal{B}_\mu=n(\mu).$$

The orthogonality relations

In Lusztig’s survey on character sheaves, he shows that the Hall-Littlewood polynomials (and similar functions for other Lie types) satisfy certain orthogonality and triangularity conditions that determine them completely. To state them in the type A case, we first define $\widetilde{H}_\mu[(1-t)X;t]$ to be the result of plugging in the monomials $x_1,-tx_1,x_2,-tx_2,\ldots$ for $x_1,x_2,\ldots$ in the Hall-Littlewood polynomials. (This is a special kind of plethystic substitution.) Then Lusztig’s work shows that:

  • $\left\langle \widetilde{H}_\mu(X;t),s_\lambda\right\rangle=0$ for any $\lambda<\mu$ in dominance order, and $\langle\widetilde{H}_\mu,s_\mu\rangle=1$
  • $\left\langle \widetilde{H}_\mu[(1-t)X;t],s_\lambda\right\rangle=0$ for any $\lambda>\mu$ in dominance order
  • $\left\langle \widetilde{H}_\mu(X;t),\widetilde{H}_{\lambda}[(1-t)X;t]\right\rangle=0$ whenever $\lambda\neq \mu$.

In all three of the above, the inner product $\langle,\rangle$ is the Hall inner product, which can be defined as the unique inner product for which $$\langle s_\lambda,s_\mu\rangle = \delta_{\lambda\mu}$$ for all $\lambda$ and $\mu$.

Since the Schur functions $s_\lambda$ correspond to the irreducible representations $V_\lambda$ of $S_n$, we can therefore interpret these orthogonality conditions in a representation theoretic manner. The inner product $\left \langle \widetilde{H}_\mu(X;t),s_\lambda \right\rangle$ is the coefficient of $s_\lambda$ in the Schur expansion of $\widetilde{H}_\mu$, and is therefore the Hilbert series of the isotypic component of type $V_\lambda$ in the cohomology ring $R_\mu=H^\ast(\mathcal{B}_\mu)$. Moreover, the seemingly arbitrary substitution $X\mapsto (1-t)X$ actually corresponds to taking tensor products with the exterior powers of the permutation representation $V$ of $S_n$. To be precise: $$\widetilde{H}_\mu[(1-t)X;t]=\sum_{i\ge 0} (-1)^i t^i \Frob_t(R_\mu\otimes \Lambda^i(V)).$$

It turns out that any two of the three conditions above uniquely determine the Hall-Littlewood polynomials, and in fact can be used to calculate them explicitly. On the next page, we will work out an example using the first and third conditions above.

Can you prove it?

As this ingenious post over at Math With Bad Drawings explained so clearly recently, there is a big difference between finding the answer to a math problem and being able to explain, beyond all reasonable doubt, why your answer is correct. It is the difference between solving it and proving it!

While mathematical proof is a huge part of math and science, it is unfortunately somewhat overlooked in the standard US curriculum. Partly for this reason, my family and I started a new math summer camp called Prove it! Math Academy. This year’s program will be a two-week crash course in proofs, using challenging problems and advanced mathematical concepts as examples.

Consider the following diagram which appears on our program website:

[Figure: the circle configuration, with three concurrent lines through the centers of the tangent circles]

No explanation, no words, no proof. Just the picture. Enticing!

The first step in proving a fact is to state exactly what it is you are trying to prove. In this case, one might generalize/interpret this diagram as follows. Suppose you have a circle $O$ (the big white one) containing three circles $A$, $B$, and $C$ inside of it that are mutually externally tangent to each other and internally tangent to $O$. (These three circles $A$, $B$, and $C$ would be the large green, large yellow, and small red circle in the diagram above.) Now let $X$, $Y$, and $Z$ be the remaining green, yellow, and red circles respectively: for instance, $X$ is the circle tangent internally to $O$ and externally to $B$ and $C$. Then the lines $AX$, $BY$, and $CZ$ connecting the centers of these circles all meet at one point – they are concurrent.

You can construct this in Geogebra or Asymptote. You can vary the sizes of the circles and test that indeed, these lines always seem to be concurrent. But unlike in science, lots of evidence to support a fact is not good enough in mathematics. We want to prove beyond all reasonable doubt that this is always true, no matter what the sizes of the circles are. In other words, you can solve it, but can you prove it?

A very elegant solution to this problem, which appears on the next page of this post, is due to Michael Kural, one of my former students at MOP. He was intrigued by the image on our program flyer at a recent math tournament, and came up with a proof using only Desargues’ theorem, Monge’s theorem, homothety, and inversion. Read on!

The Springer Correspondence, Part II: The Resolution

This is a continuation of The Springer Correspondence, Part I. Here we will work with unipotent matrices to construct the Springer resolution and the cohomology of its fibers.

Unipotent Matrices and Partitions

A unipotent element of a linear algebraic group $G$ is any element $u\in G$ such that $1-u$ is nilpotent. That is, $u=1+n$ where $n^k=0$ for some $k$.

To get a sense of what unipotent matrices look like, consider the type A situation in which $\DeclareMathOperator{\GL}{GL}\newcommand{\CC}{\mathbb{C}} G=\GL_n(\CC)$. Given a unipotent element $u$, we can conjugate it by some matrix to put it in Jordan normal form. It will look something like this:
$$gug^{-1}=\left(\begin{array}{ccccccc}
\lambda_1 & 1 & & & & & \\
& \lambda_1 & 1 & & & & \\
& & \lambda_1 & & & & \\
& & & \lambda_2 & 1 & & \\
& & & & \lambda_2 & & \\
& & & & & \ddots & \\
& & & & & & \lambda_k
\end{array}\right)$$

It turns out that the matrix above is particularly simple in this case:

The eigenvalues $\lambda_i$ of a unipotent matrix are all $1$.

To see this, suppose $\lambda$ is an eigenvalue of $u$. We have $uv=\lambda v$ for some vector $v$, and so $$(1-u)v=(1-\lambda)v.$$ Since $1-u=n$ is nilpotent, say with $n^k=0$, we have $$(1-u)^kv=(1-\lambda)^kv=0,$$ so $(1-\lambda)^k=0$. Since $\lambda\in\CC$ and $\CC$ is a field, it follows that $\lambda=1$, as claimed.

Therefore, every unipotent matrix is conjugate to a matrix having all $1$’s on the diagonal, $0$’s or $1$’s on the off-diagonal, and $0$’s everywhere else. The blocks of $1$’s on the off-diagonal split the matrix into Jordan blocks, which we can order by size from greatest to least. Let the sizes of the Jordan blocks be $\mu_1,\mu_2,\ldots,\mu_k$. Then $\mu=(\mu_1,\ldots,\mu_k)$ is a partition of $n$, and determines the conjugacy class of a given unipotent matrix.

For instance, the partition $\mu=(3,2,2)$ corresponds to the conjugacy class of unipotent matrices with the Jordan canonical form below.

[Figure: the Jordan canonical form for the unipotent conjugacy class of shape $(3,2,2)$]

This can all be summed up in the following fact:

The unipotent conjugacy classes in $\GL_n$ are in one-to-one correspondence with the partitions of $n$.

Now, I know what you are thinking:

“Maria, if the unipotent conjugacy classes of $\GL_n$ and the irreducible representations of $S_n$ are both indexed by the partitions of $n$, shouldn’t there be some nice geometric construction that relates them directly?”

Indeed there is! The Springer correspondence gives just that – and furthermore relates the unipotent conjugacy classes of any Lie group $G$ to the representations of its Weyl group.

The Springer Resolution

In what follows, let $G$ be a Lie group, and let $U$ be the subvariety of $G$ consisting of all unipotent elements. The variety $U$ is not smooth in general, and to resolve the singularities we construct the variety $\widetilde{U}\subset U\times \mathcal{B}$ by $$\widetilde{U}=\{(u,B):u\in B\}.$$ Recall from the previous post that $\mathcal{B}$ is the variety of all Borel subgroups of $G$ and is isomorphic to the flag variety $G/B$ for any Borel $B$. If we interpret $\mathcal{B}$ as the flag variety in the case $G=\GL_n$, we can alternatively define $\widetilde{U}$ as the set of all pairs $(u,F)$ where $F$ is a flag and $u$ fixes $F$, that is, $uF_i=F_i$ for each $i$.

It turns out that $\widetilde{U}$ is smooth and the projection map $$\pi:\widetilde{U}\to U$$ is proper, so it resolves the singularities in $U$. This map is known as the Springer resolution.

The theory gets rather deep at this point, so in what follows I will state the main facts without proof. For full details I refer the interested reader to the exposition in Chapter 3 of Representation Theory and Complex Geometry by Chriss and Ginzburg.

Springer Fibers and Weyl Group Action

For any $x\in U$, define the Springer fiber $\mathcal{B}_x$ to be the fiber $\pi^{-1}(x)$ of the Springer resolution over $x$, that is, the set of all Borel subgroups of $G$ that contain $x$. Now, consider the cohomology ring $H^\ast(\mathcal{B}_x)$ over $\CC$. It turns out that there is an action of the Weyl group $W$ on this cohomology ring, called the Springer action.

There is unfortunately no direct way of defining this action. To get some intuition for where the action comes from, notice that the Springer resolution above can be lifted to the entire group: one can define $\widetilde{G}$ to be the subvariety of $G\times \mathcal{B}$ consisting of all pairs $(g,B)$ such that $g\in B$. Now, let $x$ be a regular semisimple element of $G$.

In the case $G=\GL_n$, a regular semisimple element is simply a diagonalizable element $x$ with $n$ distinct nonzero eigenvalues. If $x$ is of this form, any subspace of $\CC^n$ fixed by $x$ is a direct sum of its (linear) eigenspaces. So, if $V_1,\ldots,V_n$ are the eigenspaces corresponding to the distinct eigenvalues of $x$, any flag fixed by $x$ is of the form $$V_{\sigma(1)}\subset V_{\sigma(1)}\oplus V_{\sigma(2)}\subset \cdots \subset V_{\sigma(1)}\oplus \cdots \oplus V_{\sigma(n)}$$ for some permutation $\sigma$ of $\{1,2,\ldots,n\}$. It follows that $\mathcal{B}_x$ consists of exactly $n!$ flags, and has a natural action of $S_n$ via permuting the eigenspaces $V_i$. Therefore, $S_n$ acts on $\mathcal{B}_x$ and therefore on $H^\ast(\mathcal{B}_x)$.

In general, if $x$ is regular and semisimple, the fiber $\mathcal{B}_x$ is a finite set of size $|W|$ where $W$ is the Weyl group of $G$. The regular semisimple elements form a dense subset of $G$, and one can use this to extend the action to all cohomology rings $H^\ast(\mathcal{B}_x)$ for any $x\in G$. This is the tricky part, and involves many more constructions than fit in a reasonable-length blog post, so again I refer the reader to this awesome book.

The Springer Correspondence

We’re finally ready to state the Springer correspondence. For $x\in G$, let $d(x)$ be the dimension of the Springer fiber $\mathcal{B}_x$.

In the case $G=\GL_n$, the top cohomology groups $H^{d(x)}(\mathcal{B}_x)$ are $S_n$-modules due to the Springer action described above. Notice also that $\mathcal{B}_x$ depends only on the conjugacy class of $x$, so for $x$ in the unipotent conjugacy class with shape $\mu$, we write $\mathcal{B}_\mu$ to denote this Springer fiber, with $d(\mu)$ its dimension. It turns out that these $S_n$-modules are precisely the irreducible representations of $S_n$.

The $S_n$-module $H^{d(\mu)}(\mathcal{B}_\mu)$ is isomorphic to the irreducible representation $V_\mu$ of $S_n$ corresponding to $\mu$.

And there you have it.

For general Lie groups $G$, the Springer correspondence is not quite as nice; the top cohomology groups $H^d(\mathcal{B}_u)$ (where $u$ is a unipotent conjugacy class) are not in general irreducible $W$-modules. However, all of the irreducible $W$-modules occur exactly once as a summand among the modules $H^d(\mathcal{B}_u)$, and there is a correspondence between the irreducible representations of $W$ and pairs $(u,\xi)$ where $u$ is a unipotent conjugacy class in $G$ and $\xi$ is an irreducible $G$-equivariant local system on $u$.

Hall-Littlewood Polynomials

The fact that the top cohomology groups $H^{d(\mu)}(\mathcal{B}_\mu)$ are so nice naturally raises the question: what about the other cohomology groups? What $S_n$-modules do we get in each degree?

In particular, let $R_\mu=H^\ast(\mathcal{B}_\mu)$. Then $R_\mu$ is a graded $S_n$-module with grading $$R_\mu=\bigoplus (R_\mu)_i=\bigoplus H^i(\mathcal{B}_\mu),$$ and so we can construct the Frobenius series
$$F_t(R_\mu)=\sum_{i=0}^{d(\mu)}F((R_{\mu})_i)t^i$$
where $F$ is the Frobenius map that sends $S_n$-modules to symmetric functions.

The Hall-Littlewood polynomials $\widetilde{H}_\mu(\mathbf{x};t)$ are defined to be the Frobenius characteristics $F_t(R_\mu)$, and are therefore a class of symmetric polynomials in the variables $\mathbf{x}=x_1,x_2,\ldots$ with coefficients in $\mathbb{Q}[t]$. They have incredibly rich combinatorial structure which reveals the decomposition of $R_\mu$ into irreducible $S_n$-modules… structure that I will save for a later post. Stay tuned!

The Springer Correspondence, Part I: The Flag Variety

In prior posts, we’ve seen that the irreducible representations of the symmetric group $S_n$ are in one-to-one correspondence with the partitions of $n$, and the Schur functions give an elegant encoding of their characters as symmetric polynomials. Now we can dive a bit deeper: a geometric construction known as the Springer resolution allows us to obtain all the irreducible representations of $S_n$ geometrically, and as a side bonus give natural graded representations that will allow us to define a $q$-analog of the Schur functions known as the Hall-Littlewood polynomials.

Quite a mouthful of terminology. Let’s start at the beginning.

The Classical Flag Variety

When you think of a flag, you might imagine something like this:

[Image: a flag on a flagpole]

Roughly speaking, a flag on a flagpole consists of:

  • a point (the bulbous part at the top of the pole),
  • a line passing through that point (the pole),
  • a plane passing through that line (the plane containing the flag), and
  • space to put it in.

Mathematically, this is the data of a complete flag in three dimensions. However, higher-dimensional beings would require more complicated flags. So in general, a complete flag in $n$-dimensional space $\mathbb{C}^n$ is a chain of vector spaces of each dimension from $0$ to $n$, each containing the previous:

$$0=V_0\subset V_1 \subset V_2 \subset \cdots \subset V_n=\mathbb{C}^n$$

with $\dim V_i=i$ for all $i$.

(Our higher-dimensional flag-waving imaginary friends are living in a world of complex numbers because $\mathbb{C}$ is algebraically closed and therefore easier to work with. However, one could define the flag variety similarly over any field $k$.)

Variety Structure

Now that we’ve defined our flags, let’s see what happens when we wave them around continuously in space. It turns out we get a smooth algebraic variety!

Indeed, the set of all possible flags in $\mathbb{C}^n$ forms an algebraic variety of dimension $n(n-1)$ (over $\mathbb{R}$), covered by open sets similar to the Schubert cells of the Grassmannian. In particular, given a flag $\{V_i\}_{i=1}^n$, we can choose $n$ vectors $v_1,\ldots,v_n$ such that the span of $v_1,\ldots,v_i$ is $V_i$ for each $i$, and list the vectors $v_i$ as row vectors of an $n\times n$ matrix. We can then perform certain row reduction operations to form a different basis $v_1^\prime,\ldots,v_n^\prime$ that also span the subspaces of the flag, but whose matrix is in the following canonical form: it has $1$’s in a permutation matrix shape, $0$’s to the left and below each $1$, and arbitrary complex numbers in all other entries.

For instance, say we start with the flag in three dimensions generated by the vectors $\langle 0,2,3\rangle$, $\langle 1, 1, 4\rangle$, and $\langle 1, 2, -3\rangle$. The corresponding matrix is $$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right).$$ $\newcommand{\PP}{\mathbb{P}} \newcommand{\CC}{\mathbb{C}} \newcommand{\RR}{\mathbb{R}} \newcommand{\ZZ}{\mathbb{Z}} \DeclareMathOperator{\Gr}{Gr} \DeclareMathOperator{\Fl}{F\ell} \DeclareMathOperator{\GL}{GL} \DeclareMathOperator{\inv}{inv}$ We start by finding the leftmost nonzero element in the first row and scale that row so that this element is $1$. Then subtract multiples of this row from the rows below it so that all the entries below that $1$ are $0$. Continue the process on all further rows:

$$\left(\begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 4 \\ 1 & 2 & -3\end{array}\right) \to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 1 & 0 & -6\end{array}\right)\to \left(\begin{array}{ccc} 0 & 1 & 1.5 \\ 1 & 0 & 2.5 \\ 0 & 0 & 1\end{array}\right)$$

It is easy to see that this process does not change the flag formed by the partial row spans, and that any two matrices in canonical form define different flags. So, the flag variety is covered by $n!$ open sets given by choosing a permutation and forming the corresponding canonical form. For instance, one such open set in the $5$-dimensional flag variety is the open set given by all matrices of the form
$$\left(\begin{array}{ccccc}
0 & 1 & \ast & \ast & \ast \\
1 & 0 & \ast & \ast & \ast \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & \ast & 0 \\
0 & 0 & 0 & 1 & 0 \end{array}\right)$$

We call this open set $X_{45132}$ because it corresponds to the permutation matrix formed by placing a $1$ in the $4$th column from the right in the first row, in the $5$th from the right in the second row, and so on. The maximum number of $\ast$’s we can have in such a matrix occurs when the permutation is $w_0=n(n-1)\cdots 3 2 1$, in which case the dimension of the open set $X_{w_0}$ is $n(n-1)/2$ over $\CC$ — or $n(n-1)$ over $\RR$, since $\CC$ is two-dimensional over $\RR$. In general, the number of $\ast$’s in the set $X_w$ is the inversion number $\inv(w)$, the number of pairs of entries of $w$ which are out of order.

Finally, in order to paste these disjoint open sets together to form a smooth manifold, we consider the closures of the sets $X_w$ as a disjoint union of other $X_w$’s. The partial ordering in which $\overline{X_w}=\sqcup_{v\le w} X_v$ is called the Bruhat order, a famous partial ordering on permutations. (For a nice introduction to Bruhat order, one place to start is Yufei Zhao’s expository paper on the subject.)

Intersection Cohomology

Now suppose we wish to answer incidence questions about our flags: which flags satisfy certain given constraints? As in the case of the Grassmannians, this boils down to understanding how the Schubert cells $X_w$ intersect. This question is equivalent to studying the cohomology ring of the flag variety $\Fl_n$ over $\mathbb{Z}$, where we consider the Schubert cells as forming a cell complex structure on the flag variety.

The cohomology ring $H^\ast(\Fl_n)$, as it turns out, is the coinvariant ring that we discussed in the last post! For full details I will refer the interested reader to Fulton’s book on Young tableaux. To give the reader the general idea here, the Schubert cell $X_w$ can be thought of as a cohomology class in $H^{2i}(\Fl_n)$ where $i=\inv(w)$. We call this cohomology class $\sigma_w$, and note that for the transpositions $s_i$ formed by swapping $i$ and $i+1$, we have $\sigma_{s_i}\in H^2(\Fl_n)$. It turns out that setting $x_i=\sigma_{s_i}-\sigma_{s_{i-1}}$ for $i\le n-1$ (where we take $\sigma_{s_0}=0$) and $x_n=-\sigma_{s_{n-1}}$ gives a set of generators for the cohomology ring, and in fact $$H^\ast(\Fl_n)=\mathbb{Z}[x_1,\ldots,x_n]/(e_1,\ldots,e_n)$$ where $e_1,\ldots,e_n$ are the elementary symmetric polynomials in $x_1,\ldots,x_n$.