This little gemstone is hidden within the folds of algebraic combinatorics, but certainly deserves its name. Just as the Fundamental Theorem of Arithmetic gives us a way of writing common objects (numbers) in a canonical form (prime factorization), the Fundamental Theorem of Symmetric Function Theory allows us to express any symmetric function in a useful canonical form.
First, some background on symmetric functions. Like generating functions, symmetric functions arise naturally in combinatorics, and the coefficients are often the point of interest. In the poetic words of Herbert Wilf in his book generatingfunctionology:
A generating function is a clothesline on which we hang up a sequence of numbers for display.
I claim, and aim to demonstrate here, that
A symmetric function is a web of clotheslines on which we hang up a collection of numbers that are indexed by partitions.
Okay, so the clothesline might be a bit more complicated in this case. But it is much more useful for areas of mathematics in which partitions play an important role.
Let’s be more rigorous. A symmetric function (over $\mathbb{Q}$) is a polynomial or series in many variables with rational coefficients which is invariant (unchanging) under permutations of the variables. For instance, $x^2 + y^2 + z^2 + 2xyz$ is a symmetric function in three variables, but $x^2 y + y^2 z + z^2 x$ is not, because interchanging $x$ and $y$ does not leave the function unchanged.
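If you’d like to experiment, here is a minimal sketch in SymPy (the helper is_symmetric is just an illustrative name, not a library routine) that checks this definition directly by trying every permutation of the variables:

```python
# A minimal sketch using SymPy: a polynomial is symmetric exactly when it is
# unchanged by every permutation of its variables, so we simply try them all.
from itertools import permutations
from sympy import symbols, expand

x, y, z = symbols('x y z')

def is_symmetric(f, variables):
    """Return True if f is invariant under every permutation of the given variables."""
    return all(
        expand(f.xreplace(dict(zip(variables, perm))) - f) == 0
        for perm in permutations(variables)
    )

print(is_symmetric(x**2 + y**2 + z**2 + 2*x*y*z, [x, y, z]))  # True
print(is_symmetric(x**2*y + y**2*z + z**2*x, [x, y, z]))      # False (swapping x and y changes it)
```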
I find that writing out a symmetric function explicitly as a polynomial, like those above, is similar to expressing a number in base ten – sometimes it is useful, but other times you need its prime factorization. The “primes” of symmetric function theory are the elementary symmetric functions.
First, define $e_n(x_1,\ldots,x_r)=m_{(1^n)}=\sum_{1\le i_1<\cdots<i_n\le r} x_{i_1}\cdots x_{i_n}.$ For instance,
$$e_3(a,b,c,d)=abc+abd+acd+bcd.$$ Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ be a partition, that is, a finite sequence of positive integers written in nonincreasing order. Then the elementary symmetric function corresponding to $\lambda$ is defined to be the product
$$e_\lambda=e_{\lambda_1}e_{\lambda_2}\cdots e_{\lambda_k}.$$ The remarkable fact is that the symmetric functions $e_\lambda$ form a basis for the $\mathbb{Q}$-module of symmetric functions! In other words:
Fundamental Theorem of Symmetric Function Theory: Every symmetric function can be written uniquely in the form $\sum_{\lambda} c_{\lambda}e_\lambda$, where each $c_\lambda\in \mathbb{Q}$.
So, the $e_\lambda$’s are a clothesline of sorts, indexed by the partitions, on which to hang coefficients.
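For the computationally inclined, here is a small SymPy sketch (the helpers e and e_lambda are illustrative names, not library functions) that builds the $e_n$’s and the $e_\lambda$’s straight from the definitions:

```python
# A sketch of the definitions above, using SymPy: e_n sums the products of n
# distinct variables, and e_lambda multiplies the e's over the parts of a partition.
from itertools import combinations
from math import prod
from sympy import symbols, expand

def e(n, variables):
    """Elementary symmetric polynomial e_n in the given variables."""
    return sum(prod(c) for c in combinations(variables, n))

def e_lambda(partition, variables):
    """e_lambda = e_{lambda_1} e_{lambda_2} ... for a partition given as a tuple of parts."""
    return prod(e(part, variables) for part in partition)

a, b, c, d = symbols('a b c d')
print(e(3, [a, b, c, d]))                   # a*b*c + a*b*d + a*c*d + b*c*d
print(expand(e_lambda((2, 1), [a, b, c])))  # e_2 * e_1 in three variables, expanded
```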
For example, let’s try to write the symmetric function $x^3+y^3+z^3$ in terms of elementary symmetric functions. We have the functions $e_1=x+y+z$, $e_2=xy+yz+zx$, and $e_3=xyz$ to work with. We can start by cubing $e_1$:
$$e_1^3=x^3+y^3+z^3+3(x^2y+y^2x+x^2z+z^2x+y^2z+z^2y)+6xyz.$$ We can replace the $6xyz$ with $6e_3$. We now only need to write the term $3(x^2y+y^2x+x^2z+z^2x+y^2z+z^2y)$ in terms of $e_i$’s. The product $3e_1e_2$ is pretty close, but leaves us with an extra $9xyz$ that we need to subtract out, giving us $3e_1e_2-9e_3$ for the middle term. Putting this all together and solving for $x^3+y^3+z^3$, we find:
$$x^3+y^3+z^3=e_1^3-3e_1e_2+3e_3=e_{(1,1,1)}-3e_{(2,1)}+3e_{(3)}.$$ It is fairly clear that a similar process can be used to express any symmetric function in terms of $e_{\lambda}$’s. But why is it unique?
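Before moving on, here is a quick sanity check of the identity itself, a short SymPy snippet that expands the difference of the two sides:

```python
# Verify x^3 + y^3 + z^3 = e_1^3 - 3 e_1 e_2 + 3 e_3 in three variables.
from sympy import symbols, expand

x, y, z = symbols('x y z')
e1, e2, e3 = x + y + z, x*y + y*z + z*x, x*y*z

lhs = x**3 + y**3 + z**3
rhs = e1**3 - 3*e1*e2 + 3*e3
print(expand(rhs - lhs))  # prints 0, so the two sides agree as polynomials
```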
Thanks, Maria, for an interesting look into some mathematics I probably wouldn’t have otherwise encountered!
I noticed one error though: at the end of the first page, you state that we’re going to write x^2 + y^2 + z^2 in terms of elementary symmetric functions, but then proceed to write x^3 + y^3 + z^3 instead.
Also, I was confused when you said, “We can then write the term x^2 y + y^2 x + … as e1e2 – 3e3”: I took the term to be x^2 y + y^2 x + … + z^2 y rather than x^2 y + y^2 x + … + 6xyz, which tripped me up until I worked it out, so that may merit some clarification.
Thanks for your comments, Zach! I edited the error you found.
As for the paragraph you were confused about, I’m pretty sure the mathematics was correct as stated (I didn’t mean to include the 6xyz term) but you are right that the paragraph was badly worded. I re-worded it above and included more detail – does it make more sense now?
Glad you enjoyed it!
Yes, I think it reads much more clearly now. Thanks, Maria!
Hi Maria,
This is a nice introduction to symmetric functions. However, do you think this theorem deserves to be called the “fundamental theorem” of symmetric functions? I don’t see any particularly strong reason to favour the e’s over the h’s or p’s, and if I were to call any symmetric functions “fundamental” it would be the Schur functions.
Also, I didn’t know this argument was due to Mark Haiman! It appears without attribution in section 7.4 of Volume 2 of Stanley’s Enumerative Combinatorics.
Hi Steven,
First, you are right that the same proof appears in Stanley’s book – I probably should have been more clear about my attribution. I didn’t mean that it is due to Mark Haiman; I meant that he was the first to teach that proof to me. I will edit my post to make that more clear.
Now, about the Fundamental Theorem… while you’re right that the Schur functions are the more “important” basis for the applications in representation theory, I do think that the elementary symmetric functions are called “elementary” or “fundamental” for good reason.
First, up to sign, they are the coefficients of a monic polynomial written in terms of its roots. This is a rather elementary fact, but I think it comes up often enough to make them stand out amongst the other symmetric functions.
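Concretely: if a monic polynomial factors as $(t-r_1)(t-r_2)\cdots(t-r_n)$, then $$\prod_{i=1}^n (t-r_i)=t^n-e_1t^{n-1}+e_2t^{n-2}-\cdots+(-1)^n e_n,$$ where each $e_k$ is evaluated at the roots $r_1,\ldots,r_n$.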
Now, the p’s also often come up when studying polynomials or looking for a simple basis, but there is a second, deeper reason why the e’s are more fundamental than the p’s. Suppose we are working over a ring R that does not contain $\mathbb{Q}$, such as $\mathbb{Z}$. So, we are considering the R-module of symmetric functions $\Lambda_R(x_1,\ldots,x_n)$. Then the p’s actually don’t form a basis for these symmetric functions! The reason is that when you express the p’s in terms of the m’s, the matrix is upper triangular, but the diagonal entries are not all $\pm 1$, and so you would need fractions in order to express the m’s in terms of the p’s.
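Already in degree two you can see this: $p_1^2=p_2+2e_2$, so $e_2=m_{(1,1)}=\frac{1}{2}(p_1^2-p_2)$, and no $\mathbb{Z}$-linear combination of $p_{(2)}=p_2$ and $p_{(1,1)}=p_1^2$ equals $e_2$, since the coefficient of $x_1x_2$ in any such combination is even.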
One could argue that the h’s are exactly as fundamental as the e’s in this sense, but from an algebraic standpoint the h’s are essentially equivalent to the e’s, since the involution $\omega$ sends one to the other. So, I’d say that it would be a restatement of the Fundamental Theorem to say that the h’s form a basis.
I can see why you’d think the Schur functions are more fundamental, but to me that seems like saying that the fact that every positive integer can be written as a sum of four squares should be called the Fundamental Theorem of Arithmetic. The Schur functions are important for certain deeper areas of mathematics, but aren’t so easy to deal with or understand as the elementary symmetric functions.
That being said, I wasn’t the one to name it the Fundamental Theorem – those are just my thoughts on the matter. I’d be interested to hear if anyone else has a better, or more historically accurate, reason for the name.
-Maria
Pingback: Theme and variations: the Newton-Girard identities | Finding Gemstones
Pingback: The hidden basis | Finding Gemstones
Pingback: Addicted to Crystal Math | Mathematical Gemstones