Schubert Calculus


Example 1.
Let’s compute $\newcommand{\PP}{\mathbb{P}}
\DeclareMathOperator{\GL}{GL}\sigma_{(1,1)}\cdot \sigma_{(2)}$ in $H^\ast(\Gr^2(\CC^4))$. The Littlewood-Richardson rule tells us that the only possible $\nu$ must be the $2\times 2$ square $(2,2)$, but there is no way to fill $\nu/\lambda$ with two $1$’s in a semistandard way. Therefore, $$\sigma_{(1,1)}\cdot \sigma_{(2)}=0.$$

Geometrically, this makes sense: $\Omega_{(1,1)}$ is the Schubert variety consisting of all $2$-dimensional subspaces of $\CC^4$ contained in a given $3$-dimensional subspace. $\Omega_{(2)}$ is the Schubert variety of all $2$-dimensional subspaces containing a given line through $0$. For a generic choice of the given $3$-dimensional subspace and line through the origin, there is no plane satisfying both conditions.

Example 2.
Let’s try the same calculation, $\sigma_{(1,1)}\cdot \sigma_{(2)}$, in $H^\ast(\Gr^2(\CC^5))$. Now the Important Box is $2\times 3$, and so the partition $\nu=(3,1)$ is a possibility. Indeed, the Littlewood-Richardson rule gives us one possible filling of $\nu/\lambda$ with two $1$’s, and so we have
$$\sigma_{(1,1)}\cdot \sigma_{(2)}=\sigma_{(3,1)}.$$

Geometrically, this also makes sense: $\Omega_{(1,1)}$ is the Schubert variety consisting of all $2$-dimensional subspaces of $\CC^5$ contained in a given $4$-dimensional subspace. $\Omega_{(2)}$ is the Schubert variety of all $2$-dimensional subspaces intersecting a given plane through $0$ in at least a line. $\Omega_{(3,1)}$, then, is the variety of all $2$-dimensional subspaces contained in a given $4$-space and also containing a given line. For generic choices, the first two conditions together are equivalent to the third: a generic plane through $0$ in $\CC^5$ meets the given $4$-space in a line, so a $2$-dimensional subspace contained in the $4$-space intersects the plane if and only if it contains that line.
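Since both products above multiply by $\sigma_{(2)}$, the Littlewood-Richardson rule here specializes to the Pieri rule: sum over all $\nu$ obtained from $\lambda$ by adding a horizontal strip of size $2$ that still fits in the Important Box. Here is a quick Python sketch of that special case (the function name and interface are my own, not from any library):

```python
from itertools import product as box_product

def pieri(lam, k, rows, cols):
    """Expand sigma_lam * sigma_(k) in H*(Gr^rows(C^(rows+cols))) by the
    Pieri rule: all nu obtained from lam by adding a horizontal strip of
    size k, where nu must fit inside the rows x cols Important Box."""
    lam = list(lam) + [0] * (rows - len(lam))
    # Horizontal strip condition: lam_i <= nu_i <= lam_{i-1} (no two added
    # boxes in the same column), with the first row bounded by the box width.
    ranges = [range(lam[i], (cols if i == 0 else lam[i - 1]) + 1)
              for i in range(rows)]
    return [tuple(v for v in nu if v > 0)
            for nu in box_product(*ranges)
            if sum(nu) - sum(lam) == k
            and all(nu[i] >= nu[i + 1] for i in range(rows - 1))]

print(pieri((1, 1), 2, 2, 2))  # []        -- Example 1: the product is 0
print(pieri((1, 1), 2, 2, 3))  # [(3, 1)]  -- Example 2
```

Running it reproduces both answers: in the $2\times 2$ box no valid $\nu$ exists, while in the $2\times 3$ box only $(3,1)$ survives.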

Example 3.
We can now check in the least elegant possible way that there exists a unique line in projective space passing through two given points. In other words, given two lines through $0$ in $\CC^3$, we wish to show that there is exactly one plane containing both, i.e. exactly one point of $\Gr^2(\CC^3)$ in the intersection of the varieties $\Omega_{(1)}$ and $\Omega_{(1)}$ (for two different flags). Our Important Box is $2\times 1$, so we have $$\sigma_{(1)}\cdot \sigma_{(1)}=\sigma_{(1,1)}.$$ Indeed, $\Omega_{(1,1)}$ consists of a single plane in $\CC^3$.

Example 4.
We can now also solve problem 3. We wish to compute the product $\sigma_{(1)}^4$ in $H^\ast(\Gr^2(\CC^4))$. We have $\sigma_{(1)}^2=\sigma_{(1,1)}+\sigma_{(2)}$ by the Littlewood-Richardson rule. Since $\sigma_{(1,1)}\cdot \sigma_{(2)}=0$ as in Example 1, we have $$\sigma_{(1)}^4=(\sigma_{(1,1)}+ \sigma_{(2)})^2=\sigma_{(1,1)}^2+\sigma_{(2)}^2=2\sigma_{(2,2)}.$$
Thus there are exactly $2$ lines intersecting four given lines in $\PP^3$.
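The computation of $\sigma_{(1)}^4$ is just four applications of the Pieri rule with $k=1$: add one box at a time, staying inside the Important Box. This is easy to automate; a minimal sketch, with names of my own choosing:

```python
from collections import Counter

def add_box(lam, rows, cols):
    """All partitions obtained from lam by adding one box, inside the
    rows x cols Important Box."""
    lam = list(lam) + [0] * (rows - len(lam))
    out = []
    for i in range(rows):
        bound = cols if i == 0 else lam[i - 1]  # can't exceed the row above
        if lam[i] < bound:
            nu = lam.copy()
            nu[i] += 1
            out.append(tuple(v for v in nu if v > 0))
    return out

def sigma1_power(power, rows, cols):
    """Expand sigma_(1)^power in H*(Gr^rows(C^(rows+cols)))."""
    classes = Counter({(): 1})
    for _ in range(power):
        nxt = Counter()
        for lam, coeff in classes.items():
            for nu in add_box(lam, rows, cols):
                nxt[nu] += coeff
        classes = nxt
    return dict(classes)

print(sigma1_power(4, 2, 2))  # {(2, 2): 2} -- two lines meet four general lines
```

The intermediate step reproduces $\sigma_{(1)}^2=\sigma_{(1,1)}+\sigma_{(2)}$ as well.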

Example 5.
Finally, let’s solve problem 4 from the first page. The statement translates to proving a relation of the form $$\sigma_{(1)}^{r(m-r)}=c\cdot \sigma_{(n,n,\cdots, n)}$$ where $c$ is the desired number of $r$-planes and the Schubert class on the right hand side, with $n=m-r$ repeated $r$ times, is the class of the Important Box.

First note that some relation of this form must hold, since any partition $\nu$ on the right hand side of the product must have size $r(m-r)$ and fit in the Important Box. The Box itself is the only such partition.

To compute $c$, we notice that it is the same as the coefficient of $s_{(n,n,\ldots,n)}$ in the product of Schur functions $s_{(1)}^{rn}$ in the ring $\Lambda(x_1,x_2,\ldots)$. We now introduce some more well-known facts and definitions from symmetric function theory. (See Stanley’s or Sagan’s book to learn about symmetric functions in detail.)

Define the monomial symmetric function $m_\lambda$ to be the sum of all monomials in $x_1,x_2,\ldots$ having exponents $\lambda_1,\ldots,\lambda_r$. Then it is not hard to see, from the combinatorial definition of Schur functions, that $$s_\lambda=\sum_{\mu} K_{\lambda\mu} m_\mu$$ where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$. The numbers $K_{\lambda\mu}$ are called the Kostka numbers, and they can be thought of as a change of basis matrix in the space of symmetric functions.
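For small shapes, the Kostka numbers can be computed directly from this definition by brute-force enumeration of semistandard fillings. A sketch (the function name is mine; this is exponential in general, but fine for small $\lambda$):

```python
def kostka(lam, mu):
    """Number of semistandard Young tableaux of shape lam and content mu,
    by brute-force backtracking over fillings with entries 1..len(mu)."""
    cells = [(r, c) for r, row_len in enumerate(lam) for c in range(row_len)]
    n = len(mu)
    count = 0

    def fill(idx, tab):
        nonlocal count
        if idx == len(cells):
            content = [0] * n
            for row in tab:
                for v in row:
                    content[v - 1] += 1
            count += (content == list(mu))
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])      # rows weakly increase
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)  # columns strictly increase
        for v in range(lo, n + 1):
            tab[r][c] = v
            fill(idx + 1, tab)

    fill(0, [[0] * row_len for row_len in lam])
    return count

print(kostka((2, 1), (1, 1, 1)))  # 2 standard tableaux of shape (2,1)
```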

The (complete) homogeneous symmetric function $h_\lambda$ is defined to be $h_{\lambda_1}\cdots h_{\lambda_r}$ where $h_d$ is the sum of all monomials of degree $d$ for any given $d$. The homogeneous symmetric functions also form a $\CC$-basis for $\Lambda(x_1,x_2,\ldots)$, and one can then define an inner product on $\Lambda$ such that $$\langle h_\lambda,m_\mu\rangle=\delta_{\lambda\mu},$$ i.e. the $h$’s and $m$’s are dual bases. Remarkably, the $s_\lambda$’s are orthonormal with respect to this inner product: $$\langle s_\lambda,s_\mu\rangle=\delta_{\lambda\mu},$$ and so we have
$$\begin{aligned}
\langle h_\mu,s_\lambda\rangle &= \left\langle h_\mu, \sum_{\nu} K_{\lambda\nu} m_\nu\right\rangle\\
&= \sum_{\nu} K_{\lambda\nu}\langle h_\mu, m_\nu \rangle \\
&= \sum_{\nu} K_{\lambda\nu} \delta_{\mu\nu} \\
&= K_{\lambda\mu}.
\end{aligned}$$
Thus we have the dual formula $$h_\mu=\sum_{\lambda}K_{\lambda\mu}s_\lambda.$$

Returning to the problem, notice that $s_{(1)}=m_{(1)}=h_{(1)}=x_1+x_2+x_3+\cdots$. Thus $s_{(1)}^{rn}=h_{1}^{rn}=h_{(1,1,1,\ldots,1)}$ where the last partition has $rn$ parts. It follows from the formula above that the coefficient of $s_{(n,n,\cdots,n)}$ in $h_{(1,1,1,\ldots,1)}$ is the number of fillings of the Important Box with content $(1,1,1,\ldots,1)$, i.e. the number of standard Young tableaux of Important Box shape.

There is a well-known and hard-to-prove theorem known as the hook length formula which will finish the computation.

Define the hook length $\DeclareMathOperator{\hook}{hook}\hook(s)$ of a square $s$ in a Young diagram to be the number of squares strictly to the right of it in its row plus the number of squares strictly below in its column plus $1$ for the square itself.
(Hook length formula.)
The number of standard Young tableaux of shape $\lambda$ is $$\frac{|\lambda|!}{\prod_{s\in \lambda}\hook(s)}.$$

Applying the hook length formula to the $r\times (m-r)$ Important Box yields the total of $$\frac{(r(m-r))!\cdot 1!\cdot 2!\cdot \cdots\cdot (r-1)!}{(m-r)!\cdot(m-r+1)!\cdot\cdots\cdot (m-1)!}$$ standard fillings, as desired. (For $r=2$, $m=4$ this gives $4!\cdot 1!/(2!\cdot 3!)=2$, matching Example 4.)
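For a numerical sanity check, the hook length formula is only a few lines of Python (helper names are mine):

```python
from math import factorial, prod

def syt_count(lam):
    """Number of standard Young tableaux of shape lam, by the hook length
    formula: |lam|! divided by the product of all hook lengths."""
    col_len = lambda c: sum(1 for row in lam if row > c)  # conjugate partition
    hooks = prod((lam[r] - c - 1)        # squares strictly to the right
                 + (col_len(c) - r - 1)  # squares strictly below
                 + 1                     # the square itself
                 for r in range(len(lam)) for c in range(lam[r]))
    return factorial(sum(lam)) // hooks

print(syt_count((2, 2)))  # 2 -- Example 4: lines meeting four general lines
print(syt_count((3, 3)))  # 5 -- the 2 x 3 Important Box for Gr^2(C^5)
```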

6 thoughts on “Schubert Calculus”

  1. Hey Maria, thanks for posting this! I’ve been looking to learn a little bit about Schubert calculus, and this looks like a fantastic introduction – I’ll try to get around to reading it in detail this weekend.

    • Anyway – do you suggest any references for a systematic treatment of the subject? Preferably something that isn’t too technical (everything you use in your post would be fine, I think; I think I can fake an understanding of algebraic geometry up to a point).

      • Hey Carl, glad you found this useful! I have a few references scattered throughout the post, but I’d say the main one I learned from is Fulton’s book, “Young tableaux”. I’d say that one is pretty accessible for those who don’t have a strong background in algebraic geometry/topology, because he neatly pushes all the technical details about cohomology into the appendix. But then, the appendix is also detailed enough if you do want to understand those details.

        I’ve also heard good things about the new book by Eisenbud and Harris, “3264 and all that.” It can be found online here:
        and it certainly goes into much more detail on the geometric side of things, because it’s building up a more general theory than just intersections of Schubert varieties. I plan to read (parts of) it at some point soon.

  2. One combinatorial way to look into the Schubert decomposition is:
    1. We associate a matrix to each k-subset as you said.
    2. Then in each matrix consider the set of subsets of columns (numbered from left to right) which are bases.
    3. Put a total order on such sets by considering the induced lexicographic order.
    Each Schubert cell then corresponds to all subspaces with a specified subset of the columns being the maximal basis. So in the example of page 2 we want all matrices with the subset (2,4,7) being the biggest basis lexicographically. This is a complicated way of saying the same thing, but now:
    – If we consider ALL possible orders of the columns (Schubert is very dependent on ordering from left to right) and intersect them all, in the common refinement we would be grouping subspaces by specifying the complete set of bases. This is the matroid strata decomposition. The bad thing is, Mnev’s universality theorem says that such cells can be “as bad as possible”, and while I’m never sure of the precise meaning, it certainly means that working with them is much worse than working with the Schubert cells, which are just affine spaces.
    – If we consider all cyclic orders and intersect them, the cells in the refinement are called the positroid decomposition, and there are lots of combinatorics associated to them. In particular they have something called a Le diagram associated to them.
    – Lauren Williams and Kelli Talaska consider something similar, the Go diagrams, to give a refinement of the positroid decomposition. The open cells are not bad, but their closures and how they intersect was (or maybe still is) a bit mysterious.

  3. Pingback: The CW complex structure of the Grassmannian | Mathematical Gemstones

  4. Hi Maria, thanks for the post and the link of the video lecture about Schubert calculus in your other post, they have been so helpful!
    I have one question which might be stupid. When we want to find which Schubert cell a point of the Grassmannian belongs to, we cut off the upside-down staircase from the matrix, all entries in the staircase should be 0, right? Why must the first column of the matrix be a zero column?
