There is a fun little fact regarding polynomials in two variables $x$ and $y$:
Any two-variable polynomial $f(x,y)$ can be uniquely written as a sum of a symmetric polynomial and an antisymmetric polynomial.
(To be more precise, this is true for polynomials over any field of characteristic not equal to $2$. For simplicity, in what follows we will assume that our polynomials have coefficients in $\mathbb{C}$.)
Recall that a polynomial $g$ is symmetric if it does not change upon permuting its variables; in this case, with two variables, $g(x,y)=g(y,x)$. It is antisymmetric if swapping any two of the variables negates it; in this case, $g(x,y)=-g(y,x)$.
It is not hard to prove the fact above. To show existence of the decomposition, set $g(x,y)=\frac{f(x,y)+f(y,x)}{2}$ and $h(x,y)=\frac{f(x,y)-f(y,x)}{2}$. Then $$f(x,y)=g(x,y)+h(x,y),$$ and $g$ is symmetric while $h$ is antisymmetric. For instance, if $f(x,y)=x^2$, then we can write $$x^2=\frac{x^2+y^2}{2}+\frac{x^2-y^2}{2}.$$
For uniqueness, suppose $f(x,y)=g_0(x,y)+h_0(x,y)$ where $g_0$ is symmetric and $h_0$ is antisymmetric. Then $g_0+h_0=g+h$, and so $$g_0-g=h-h_0.$$ The left hand side of this equation is symmetric and the right hand side is antisymmetric, so each side is simultaneously symmetric and antisymmetric; any such polynomial $p$ satisfies $p=-p$, and hence both sides are identically zero. This implies that $g_0=g$ and $h_0=h$, so the decomposition $f=g+h$ is unique. QED.
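For readers who like to experiment, here is a minimal sympy sketch of the decomposition and its defining properties (the names $g$ and $h$ follow the proof above; the snippet is an editorial illustration, not part of the original argument):

```python
from sympy import symbols, expand

x, y = symbols('x y')

f = x**2  # the example from the post; any polynomial in x and y works

swap = {x: y, y: x}
g = (f + f.subs(swap, simultaneous=True)) / 2  # symmetric part
h = (f - f.subs(swap, simultaneous=True)) / 2  # antisymmetric part

assert expand(g + h - f) == 0                            # f = g + h
assert expand(g - g.subs(swap, simultaneous=True)) == 0  # g(x,y) =  g(y,x)
assert expand(h + h.subs(swap, simultaneous=True)) == 0  # h(x,y) = -h(y,x)
```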
This got me thinking…
Is there an analogous decomposition for polynomials in three variables? Or any number of variables?
The above decomposition doesn’t make sense in three variables, but perhaps every polynomial in $x$, $y$, and $z$ can be written uniquely as a sum of a symmetric, antisymmetric, and… some other particular type(s) of polynomials.
Indeed, it can be generalized in the following sense. Notice that any antisymmetric polynomial $h$ in two variables is divisible by $x-y$, since setting $x=y$ gives us $h(x,x)=-h(x,x)$, and so $h(x,x)=0$. Moreover, dividing by $x-y$ gives a symmetric polynomial: if $$h(x,y)=p(x,y)\cdot(x-y)$$ is antisymmetric, then $p(x,y)\cdot (x-y)=-p(y,x)\cdot(y-x)=p(y,x)\cdot(x-y)$, and so $p(x,y)=p(y,x)$.
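Here is a minimal sympy sketch of this divisibility argument, using $h=x^3-y^3$ as a sample antisymmetric polynomial (the choice of example is mine):

```python
from sympy import symbols, cancel, expand

x, y = symbols('x y')

h = x**3 - y**3          # antisymmetric: h(y, x) = -h(x, y)
p = cancel(h / (x - y))  # exact division; here p = x**2 + x*y + y**2

# the quotient p is symmetric: p(x, y) = p(y, x)
assert expand(p - p.subs({x: y, y: x}, simultaneous=True)) == 0
```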
Thus any antisymmetric polynomial $h$ is equal to $x-y$ times a symmetric polynomial, and so we can restate our fact above in the following way:
Any two-variable polynomial $f(x,y)$ can be written uniquely as a linear combination of $1$ and $x-y$, using symmetric polynomials as the coefficients.
For instance, $f(x,y)=x^2$ can be written as $$x^2=\left(\frac{x^2+y^2}{2}\right)\cdot 1+\left(\frac{x+y}{2}\right)\cdot (x-y).$$
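This identity is easy to verify by hand, or with a one-line sympy check:

```python
from sympy import symbols, expand

x, y = symbols('x y')

lhs = x**2
rhs = (x**2 + y**2)/2 * 1 + (x + y)/2 * (x - y)
assert expand(lhs - rhs) == 0  # the two sides agree as polynomials
```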
Now, to generalize this to three variables, in place of $x-y$, we consider the polynomial $$\Delta=(x-y)(x-z)(y-z),$$ the Vandermonde product in three variables. Also consider the five polynomials:
- $x^2-z^2+2yz-2xy$,
- $z^2-y^2+2xy-2xz$,
- $x-y$,
- $y-z$, and
- $1$,
each of which is obtained, up to a constant multiple, by taking certain partial derivatives of $\Delta$. It turns out that every polynomial in three variables can be decomposed uniquely as a linear combination of these six polynomials ($\Delta$ together with the five above), using symmetric polynomials as the coefficients!
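The particular derivative orders below are one possible reconstruction (each listed polynomial arises, up to a constant factor, as an iterated partial derivative of $\Delta$); here is a sympy sketch verifying this:

```python
from sympy import symbols, diff, expand

x, y, z = symbols('x y z')
Delta = (x - y)*(x - z)*(y - z)

# first derivatives give the two quadratic polynomials:
assert expand(diff(Delta, y) - (x**2 - z**2 + 2*y*z - 2*x*y)) == 0
assert expand(diff(Delta, x) - (z**2 - y**2 + 2*x*y - 2*x*z)) == 0

# second derivatives give the linear polynomials, up to a factor of 2:
assert expand(diff(Delta, x, y) - 2*(x - y)) == 0
assert expand(diff(Delta, x, 2) - 2*(y - z)) == 0

# a third derivative gives a nonzero constant:
assert expand(diff(Delta, x, 2, y) - 2) == 0
```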
Where do these six polynomials come from? Turn to the next page to find out…
I just realized I forgot to reference Francois Bergeron’s book for the proof on the last page of this post: see http://www.amazon.com/Algebraic-Combinatorics-Coinvariant-Mathematics-Mathematiques/dp/1568813244, in the chapter on coinvariants.
I’d much rather use the basis of Schubert polynomials!
Thanks for pointing that out Allen – I had forgotten about the connection with the flag variety. Perhaps I’ll post about Schubert polynomials next week or the week after.
Just discovered this blog – very nice!
Expanding on what Allen K. said about Schubert polynomials, I like to think of at least three different well-known bases for the polynomial ring in n variables over the symmetric polynomials which model the q-factorial in 3 ways:
(1) The Schubert polynomials model the q-factorial as the generating function for permutations w by inv(w), the inversion number or Coxeter group length.
(2) Garsia’s “descent monomial basis” models the q-factorial as the generating function for permutations w by maj(w), the major index.
(3) Then there is what Garsia calls “the Artin basis”, that is, the set of all monomials in x_1,…,x_n for which the degree of x_i is at most i-1. They appear in E. Artin’s old slim paperback book “Galois Theory”, where he proves the fundamental theorem of symmetric functions. And these monomials model the q-factorial as a product of q-numbers, similarly to your partial derivatives of the Vandermonde product.
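As a quick illustration that these three models of the q-factorial agree for $n=3$, here is a minimal sympy sketch (the helper names inv and maj are editorial choices, not from the comment):

```python
from itertools import permutations
from sympy import symbols, expand, prod

q = symbols('q')
n = 3

def inv(w):
    # number of inversions (Coxeter group length) of the permutation w
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):
    # major index: sum of the (1-based) descent positions of w
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

perms = list(permutations(range(1, n + 1)))
by_inv = sum(q**inv(w) for w in perms)        # model (1): inversions
by_maj = sum(q**maj(w) for w in perms)        # model (2): major index
artin = prod(sum(q**j for j in range(i)) for i in range(1, n + 1))  # model (3)

assert expand(by_inv - by_maj) == 0
assert expand(by_inv - artin) == 0
```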
Thanks for the detailed reply! I’m sorry it took me so long to see that you commented and approve it. You should be able to comment freely from now on. 🙂
Maria
I think I’m missing something – why are we tensoring with the base ring $\mathbb{C}[S_n]$? For instance, wouldn’t that make $(x+y)(x+2y) = (-1 \cdot (x+y))\otimes (x+2y) = (x+y)\otimes (-1\cdot (x+2y)) = (x+y)(2x+y)$? (where $-1$ is the nonidentity element of $S_2$ considered inside $\mathbb{C}[S_2]$.)
By the way, your blog is awesome!
Pingback: Higher specht polynomials | Mathematical Gemstones