# The combinatorics behind the rule: symmetric function theory

The Littlewood-Richardson and Pieri rules come up in symmetric function theory as well, and the combinatorics is much easier to deal with in this setting.

$\newcommand{\PP}{\mathbb{P}}
\newcommand{\CC}{\mathbb{C}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\ZZ}{\mathbb{Z}}
\DeclareMathOperator{\Gr}{Gr}
\DeclareMathOperator{\Fl}{Fl}
\DeclareMathOperator{\GL}{GL}$

The ring of *symmetric functions* in infinitely many variables $x_1,x_2,\ldots$ is the ring $$\Lambda(x_1,x_2,\ldots)=\CC[x_1,x_2,\ldots]^{S_\infty}$$ of formal power series of bounded degree which are symmetric under the action of the infinite symmetric group on the indices.

For instance, $x_1^2+x_2^2+x_3^2+\cdots$ is a symmetric function, because interchanging any two of the indices does not change the series.

The most important symmetric functions in this context are the *Schur functions*. They can be defined in many equivalent ways, from being characters of irreducible representations of $\GL_n$ to an expression as a ratio of determinants. We use the combinatorial definition, since it is most relevant here.

The *Schur functions* are the symmetric functions defined by

$$s_\lambda=\sum_{T} x^T$$ where the sum ranges over all SSYT's $T$ of shape $\lambda$, and $x^T=x_1^{m_1}x_2^{m_2}\cdots$ where $m_i$ is the number of entries of $T$ equal to $i$.
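To make the definition concrete, here is a small brute-force sketch (the function names `ssyt` and `schur_poly` are my own, and I restrict to three variables $x_1,x_2,x_3$) that enumerates the SSYT's of shape $(2,1)$ and assembles the corresponding Schur polynomial:

```python
from itertools import product
from collections import Counter

def ssyt(shape, max_entry):
    """Enumerate SSYT's of the given shape with entries in
    {1, ..., max_entry} by brute force over all fillings."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    for values in product(range(1, max_entry + 1), repeat=len(cells)):
        T = dict(zip(cells, values))
        # rows weakly increase left to right; columns strictly increase top to bottom
        ok = all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T) and \
             all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T)
        if ok:
            yield T

def schur_poly(shape, n):
    """s_lambda(x_1, ..., x_n) as a Counter sending each exponent
    vector (m_1, ..., m_n) to its coefficient."""
    poly = Counter()
    for T in ssyt(shape, n):
        weight = Counter(T.values())
        poly[tuple(weight.get(i, 0) for i in range(1, n + 1))] += 1
    return poly

# s_(2,1)(x1, x2, x3): setting every x_i = 1 just counts the tableaux
s21 = schur_poly((2, 1), 3)
print(sum(s21.values()))  # 8
```

There are $8$ SSYT's of shape $(2,1)$ with entries at most $3$, and the output is visibly symmetric: the coefficient of $x_1^2x_2$ equals that of $x_2^2x_3$, and $x_1x_2x_3$ appears with coefficient $2$.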

It is known that the Schur functions are indeed symmetric, that they form a basis of $\Lambda_\CC$, and that they satisfy the Littlewood-Richardson rule (see Fulton):

$$s_\lambda\cdot s_\mu=\sum_{\nu} c^{\nu}_{\lambda\mu} s_\nu$$ the only difference being that here, the sum is *not* restricted by any Important Box. It follows that there is a surjective ring homomorphism

$$\Lambda(x_1,x_2,\ldots)\to H^\ast(\Gr^n(\CC^m))$$

sending $s_\lambda\mapsto \sigma_\lambda$ if $\lambda$ fits inside the Important Box, and $s_\lambda\mapsto 0$ otherwise.

In particular, this means that any relation involving symmetric functions translates to a relation on $H^\ast(\Gr^n(\CC^m))$. This connection makes the combinatorial study of symmetric functions an essential tool in Schubert calculus.
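As a quick sanity check of the smallest instance of the Littlewood-Richardson rule, $s_{(1)}\cdot s_{(1)} = s_{(2)} + s_{(1,1)}$, one can evaluate both sides at a sample point, using the standard identifications $s_{(1)}=e_1$, $s_{(2)}=h_2$, and $s_{(1,1)}=e_2$ (a numerical sketch, not a proof; the sample point is arbitrary):

```python
from itertools import combinations_with_replacement, combinations

x = [2, 3, 5]  # arbitrary sample point in three variables

s1  = sum(x)                                                    # s_(1)   = e_1
s2  = sum(a * b for a, b in combinations_with_replacement(x, 2))  # s_(2)   = h_2
s11 = sum(a * b for a, b in combinations(x, 2))                   # s_(1,1) = e_2

print(s1 * s1, s2 + s11)  # both sides agree: 100 100
```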

Hey Maria, thanks for posting this! I’ve been looking to learn a little bit about Schubert calculus, and this looks like a fantastic introduction – I’ll try to get around to reading it in detail this weekend.

Anyway – do you suggest any references for a systematic treatment of the subject? Preferably something that isn’t too technical (everything you use in your post would be fine, I think; I think I can fake an understanding of algebraic geometry up to a point).

Hey Carl, glad you found this useful! I have a few references scattered throughout the post, but I’d say the main one I learned from is Fulton’s book, “Young tableaux”. I’d say that one is pretty accessible for those who don’t have a strong background in algebraic geometry/topology, because he neatly pushes all the technical details about cohomology into the appendix. But then, the appendix is also detailed enough if you do want to understand those details.

I’ve also heard good things about the new book by Eisenbud and Harris, “3264 and all that.” It can be found online here: http://isites.harvard.edu/fs/docs/icb.topic720403.files/book.pdf

and it certainly goes into much more detail on the geometric side of things, because it’s building up a more general theory than just intersections of Schubert varieties. I plan to read (parts of) it at some point soon.

One combinatorial way to look into the Schubert decomposition is:

1. We associate a matrix to each $k$-subset as you said.

2. Then in each matrix consider the set of subsets of columns (numbered from left to right) which are bases.

3. Put a total order on such sets by considering the induced lexicographic order.

Each Schubert cell then corresponds to all subspaces with a specified subset of the columns being the maximal basis. So in the example on page 2 we want all matrices with the subset (2,4,7) being the biggest basis lexicographically. This is a complicated way of saying the same thing, but now:

- If we consider ALL possible orders of the columns (Schubert cells are very dependent on ordering from left to right) and intersect them all, in the common refinement we would be grouping subspaces by specifying the complete set of bases. This is the matroid strata decomposition. The bad thing is, Mnev's universality theorem says that such cells can be "as bad as possible," and while I'm never sure of the precise meaning, it certainly means that working with them is much worse than working with the Schubert cells, which are just affine spaces.

- If we consider all cyclic orders and intersect them, the cells in the refinement are called the positroid decomposition, and there is a lot of combinatorics associated to them. In particular, they have something called a Le diagram associated to them.

- Lauren Williams and Kelli Talaska consider something similar, the Go diagrams, to give a refinement of the positroid decomposition. The open cells are not bad, but their closures and how they intersect were (or maybe still are) a bit mysterious.
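The membership test in steps 1–3 above can be sketched in code (a hedged illustration: `lex_max_basis` and the small exact determinant helper are names of my own, and I use the convention from the comment that the cell is labeled by the lexicographically largest column subset forming a basis):

```python
from itertools import combinations
from fractions import Fraction

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in m]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return sign * prod

def lex_max_basis(rows):
    """Columns are numbered 1..n left to right; return the lexicographically
    largest k-subset of columns whose k x k minor is nonsingular."""
    k, n = len(rows), len(rows[0])
    best = None
    # combinations() yields subsets in increasing lex order, so the last
    # subset with a nonzero minor is the lex-largest basis
    for cols in combinations(range(1, n + 1), k):
        minor = [[rows[r][c - 1] for c in cols] for r in range(k)]
        if det(minor) != 0:
            best = cols
    return best

# Row-reduced 2 x 4 example whose pivot columns are 2 and 4:
M = [[2, 1, 0, 0],
     [3, 0, 5, 1]]
print(lex_max_basis(M))  # (2, 4)
```

Here the subsets $(1,2)$, $(1,3)$, $(1,4)$, $(2,3)$, and $(2,4)$ are all bases, but $(3,4)$ is not, so $(2,4)$ is the lexicographically largest one, matching the pivot set.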


Hi Maria, thanks for the post and the link to the video lecture about Schubert calculus in your other post; they have been so helpful!

I have one question which might be stupid. When we want to find which Schubert cell a point of the Grassmannian belongs to, we cut off the upside-down staircase from the matrix, and all entries in the staircase should be 0, right? Why must the first column of the matrix be a zero column?