Perhaps the simplest and most essential type of symmetric function is the **monomial symmetric function**. Given a partition $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$, we define $m_\lambda$ to be the smallest symmetric function containing the monomial $x_1^{\lambda_1}x_2^{\lambda_2}\cdots x_k^{\lambda_k}$ as a term.

For example, the monomial symmetric function $m_{(2,1)}$ in three variables is

$$m_{(2,1)}=x^2 y + y^2 x + x^2 z + z^2 x + y^2 z + z^2 y,$$ since it is the smallest symmetric function containing the monomial $x^2y$ as a term. (Notice that the meaning changes with the number of variables: in two variables we have $m_{(2,1)}=x^2y+y^2x$, and in only one variable, $m_{(2,1)}=0$.)
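As a quick sanity check, here is a short Python sketch (the function name `monomial_sym` is my own, not from the post) that lists the exponent vectors of the terms of $m_\lambda$ in $n$ variables, by taking all distinct rearrangements of $\lambda$ padded with zeros:

```python
from itertools import permutations

def monomial_sym(lam, n):
    """Return the exponent vectors of the terms of m_lambda in n variables.

    Each term of m_lambda corresponds to a distinct rearrangement of the
    partition lam padded with zeros to length n; if lam has more parts
    than there are variables, m_lambda = 0 and the set is empty.
    """
    if len(lam) > n:
        return set()
    padded = tuple(lam) + (0,) * (n - len(lam))
    return set(permutations(padded))

# m_{(2,1)} in three variables has the six terms listed above:
print(len(monomial_sym((2, 1), 3)))  # 6
# in two variables only x^2 y and y^2 x survive:
print(len(monomial_sym((2, 1), 2)))  # 2
# and in one variable m_{(2,1)} = 0:
print(len(monomial_sym((2, 1), 1)))  # 0
```

The `set` handles repeated parts of $\lambda$ automatically, since equal exponent vectors collapse to a single term.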

The monomial symmetric functions are also enough to describe every symmetric function:

Every symmetric function can be written uniquely in the form $\sum_{\lambda} a_{\lambda}m_\lambda$, where each $a_\lambda\in \mathbb{Q}$.

For instance, we can write

$$x^3y+ y^3x+2xy-3x^4-3y^4=m_{(3,1)}+2m_{(1,1)}-3m_{(4)}.$$
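This expansion can be verified mechanically. Below is a rough Python sketch (the helper names `m` and `combine` are mine, not from the post) that represents a polynomial as a dictionary from exponent vectors to coefficients and checks both sides agree in two variables:

```python
from itertools import permutations

def m(lam, n):
    """m_lambda in n variables, as a dict {exponent vector: coefficient}."""
    if len(lam) > n:
        return {}
    padded = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(padded))}

def combine(terms):
    """Sum of c * p over (c, p) pairs, dropping zero coefficients."""
    total = {}
    for c, p in terms:
        for e, a in p.items():
            total[e] = total.get(e, 0) + c * a
    return {e: a for e, a in total.items() if a}

# Left-hand side: x^3 y + y^3 x + 2xy - 3x^4 - 3y^4, recorded by exponents.
lhs = {(3, 1): 1, (1, 3): 1, (1, 1): 2, (4, 0): -3, (0, 4): -3}

# Right-hand side: m_(3,1) + 2 m_(1,1) - 3 m_(4), in two variables.
rhs = combine([(1, m((3, 1), 2)), (2, m((1, 1), 2)), (-3, m((4,), 2))])

print(lhs == rhs)  # True
```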

Notice that the coefficients of the terms having the same multiset of exponents, such as $x^3y$ and $y^3x$, must be equal for the function to be symmetric. It follows that every symmetric function is uniquely a sum of constant multiples of $m_\lambda$'s.

We can now prove the Fundamental Theorem by expressing the $e_\lambda$'s of degree $d$ in terms of the $m_\lambda$'s of degree $d$, and showing that the transition matrix is invertible. (Thanks to Mark Haiman for introducing me to this proof, which appears on the next page.)
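To get a feel for this transition matrix, here is a small Python sketch (the helper names `m` and `mul` are mine, not from the post) verifying the degree-$2$ expansions $e_{(2)} = m_{(1,1)}$ and $e_{(1,1)} = e_1^2 = m_{(2)} + 2m_{(1,1)}$ in three variables; this is not the proof, just a check of the smallest case:

```python
from itertools import permutations

def m(lam, n):
    """m_lambda in n variables, as a dict {exponent vector: coefficient}."""
    if len(lam) > n:
        return {}
    padded = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(padded))}

def mul(p, q):
    """Multiply two polynomials stored as exponent-vector dicts."""
    out = {}
    for e1, a in p.items():
        for e2, b in q.items():
            e = tuple(u + v for u, v in zip(e1, e2))
            out[e] = out.get(e, 0) + a * b
    return out

n = 3  # three variables, degree-2 check
e1 = m((1,), n)    # e_1 = x + y + z = m_(1)
e2 = m((1, 1), n)  # e_2 = xy + xz + yz = m_(1,1)

# e_(2) = e_2 = m_(1,1), and e_(1,1) = e_1^2 = m_(2) + 2 m_(1,1),
# so the transition matrix is [[0, 1], [1, 2]], with determinant -1.
print(mul(e1, e1) == {**m((2,), n), **{k: 2 for k in m((1, 1), n)}})  # True
```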

Thanks, Maria, for an interesting look into some mathematics I probably wouldn’t have otherwise encountered!

I noticed one error, though: at the end of the first page, you state that we’re going to write x^2 + y^2 + z^2 in terms of elementary symmetric functions, but then proceed to write x^3 + y^3 + z^3 in terms of its elementary symmetric functions instead.

Also, I was confused when you said, “We can then write the term x^2 y + y^2 x + … as e1e2 – 3e3”: I took the term to be x^2 y + y^2 x + … + z^2 y rather than x^2 y + y^2 x + … + 6xyz, which confused me until I worked it out, so that may merit some clarification.

Thanks for your comments, Zach! I edited the error you found.

As for the paragraph you were confused about, I’m pretty sure the mathematics was correct as stated (I didn’t mean to include the 6xyz term) but you are right that the paragraph was badly worded. I re-worded it above and included more detail – does it make more sense now?

Glad you enjoyed it!

Yes, I think it reads much more clearly now. Thanks, Maria!

Hi Maria,

This is a nice introduction to symmetric functions. However, do you think this theorem deserves to be called the “fundamental theorem” of symmetric functions? I don’t see any particularly strong reason to favour the e’s over the h’s or p’s, and if I were to call any symmetric functions “fundamental” it would be the Schur functions.

Also, I didn’t know this argument was due to Mark Haiman! It appears without attribution in section 7.4 of Volume 2 of Stanley’s Enumerative Combinatorics.

Hi Steven,

First, you are right that the same proof appears in Stanley’s book – I probably should have been more clear about my attribution. I didn’t mean that it is due to Mark Haiman; I meant that he was the first to teach that proof to me. I will edit my post to make that more clear.

Now, about the Fundamental Theorem… while you’re right that the Schur functions are the more “important” basis for the applications in representation theory, I do think that the elementary symmetric functions are called “elementary” or “fundamental” for good reason.

First, they are, up to sign, the coefficients of a polynomial in terms of its roots. This is a rather elementary fact, but I think it comes up often enough to make them stand out amongst the other symmetric functions.
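In symbols: if a monic polynomial has roots $r_1,\ldots,r_n$, then

$$\prod_{i=1}^n (x - r_i) = x^n - e_1 x^{n-1} + e_2 x^{n-2} - \cdots + (-1)^n e_n,$$

where each $e_i$ is evaluated at the roots $r_1,\ldots,r_n$.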

Now, the p’s also often come up when studying polynomials or looking for a simple basis, but there is a second, deeper reason why the e’s are more fundamental than the p’s. Suppose we are working over a ring $R$ that does not contain $\mathbb{Q}$, such as $\mathbb{Z}$. So, we are considering the $R$-module of symmetric functions $\Lambda_R(x_1,\ldots,x_n)$. Then the p’s actually don’t form a basis for these symmetric functions! The reason is that when you express the p’s in terms of the m’s, the matrix is upper triangular, but the diagonal entries are not all $\pm 1$, and so you would need fractions in order to express the m’s in terms of the p’s.

One could argue that the h’s are exactly as fundamental as the e’s in this sense, but from an algebraic standpoint the h’s are essentially equivalent to the e’s, since the involution $\omega$ sends one to the other. So, I’d say that it would be a restatement of the Fundamental Theorem to say that the h’s form a basis.
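To see this in the smallest case: in degree $2$ we have $p_{(2)} = m_{(2)}$ and $p_{(1,1)} = p_1^2 = m_{(2)} + 2m_{(1,1)}$, so inverting gives $m_{(1,1)} = \frac{1}{2}\left(p_{(1,1)} - p_{(2)}\right)$, and the coefficient $\frac{1}{2}$ is not available over $\mathbb{Z}$.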

I can see why you’d think the Schur functions are more fundamental, but to me that seems like saying that the fact that every positive integer can be written as a sum of four squares should be called the Fundamental Theorem of Arithmetic. The Schur functions are important for certain deeper areas of mathematics, but aren’t as easy to deal with or understand as the elementary symmetric functions.

That being said, I wasn’t the one to name it the Fundamental Theorem – those are just my thoughts on the matter. I’d be interested to hear if anyone else has a better, or more historically accurate, reason for the name.

-Maria
