Writing polynomials in terms of invariants and coinvariants

It turns out that there is a nice explicit basis for $\mathbb{C}[x_1,\ldots,x_n]/(p_1,\ldots,p_n)$: the Vandermonde determinant $\Delta$ and all its partial derivatives. The $n\times n$ Vandermonde determinant is the determinant of the following matrix: $$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{pmatrix}$$

Explicitly, it equals $\Delta=\prod_{i < j}(x_j-x_i)$. So in the case of two variables $x,y$, we have $\Delta=y-x$, and in the case of three variables $x,y,z$, we have $\Delta=(y-x)(z-x)(z-y)$. Now, consider all partial derivatives of $\Delta$, that is, all polynomials of the form $$\frac{\partial}{\partial x_{i_1}}\frac{\partial}{\partial x_{i_2}}\cdots\frac{\partial}{\partial x_{i_k}} \Delta$$ for some $k$. The claim is that a subset of these forms a $\mathbb{C}$-basis for the ring of coinvariants; to prove it, we need to show that enough of them are linearly independent.

We begin by computing the dimension of the $d$th graded piece of the coinvariant ring for all $d$. Consider again the decomposition $$\mathbb{C}[x_1,\ldots,x_n]=\mathbb{C}[p_1,\ldots,p_n]\otimes_{\mathbb{C}}\mathbb{C}[x_1,\ldots,x_n]/(p_1,\ldots,p_n),$$ and consider the Hilbert series of both sides. (Recall that the Hilbert series of a graded ring $R=\oplus_d R_d$ is the generating function $H_R(q)=\sum_d \dim_{\mathbb{C}}(R_d) q^d$.) On the left hand side, it is a simple combinatorial exercise to see that the Hilbert series of $\mathbb{C}[x_1,\ldots,x_n]$ is $$\frac{1}{(1-q)^n}.$$ On the right hand side, recall that the bases of the ring of symmetric polynomials in $n$ variables are indexed by partitions with at most $n$ parts. So if $p_n(d)$ denotes the number of partitions of $d$ having at most $n$ parts, then the Hilbert series of $\mathbb{C}[p_1,\ldots,p_n]$ is the generating function $$\sum_d p_n(d)q^d=\frac{1}{(1-q)(1-q^2)\cdots(1-q^n)}.$$ Finally, since Hilbert series are multiplicative across tensor products, we have the equation $$\frac{1}{(1-q)^n}=\frac{1}{(1-q)(1-q^2)\cdots(1-q^n)}\operatorname{Hilb}(\mathbb{C}[x_1,\ldots,x_n]/(p_1,\ldots,p_n)).$$ Solving, we find $$\operatorname{Hilb}(\mathbb{C}[x_1,\ldots,x_n]/(p_1,\ldots,p_n))=\frac{(1-q)(1-q^2)\cdots(1-q^n)}{(1-q)^n}=(n)_q!,$$ the $q$-factorial $(n)_q!=(1)(1+q)(1+q+q^2)\cdots(1+q+\cdots+q^{n-1})$.
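These identities are easy to machine-check; here is a small sympy sketch of my own (not from the original post) for $n=3$:

```python
from sympy import symbols, Matrix, expand, cancel, prod, factorial

q, x1, x2, x3 = symbols('q x1 x2 x3')
xs = [x1, x2, x3]
n = 3

# Vandermonde matrix with rows 1, x_i, x_i^2
V = Matrix([[x**k for x in xs] for k in range(n)])
det = expand(V.det())
product_formula = expand(prod(xs[j] - xs[i]
                              for i in range(n) for j in range(i + 1, n)))
assert expand(det - product_formula) == 0  # det = prod_{i<j}(x_j - x_i)

# Hilbert series of the coinvariant ring: prod_k (1 - q^k) / (1 - q)^n
hilb = cancel(prod(1 - q**k for k in range(1, n + 1)) / (1 - q)**n)
q_fact = expand(prod(sum(q**j for j in range(k)) for k in range(1, n + 1)))
assert expand(hilb - q_fact) == 0       # equals the q-factorial (n)_q!
assert hilb.subs(q, 1) == factorial(n)  # total dimension is n! = 6
```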
This means that, in particular, since $(n)_q!$ is a polynomial in $q$ of degree $0+1+\cdots+(n-1)=\binom{n}{2}$ with leading coefficient $1$, the highest-degree elements in the coinvariant ring have degree $\binom{n}{2}$, and this degree component has dimension $1$. The Vandermonde determinant $\Delta$ has degree exactly $\binom{n}{2}$ and nonzero image in the quotient, so it spans the top degree component, and as an $S_n$-module it is a single copy of the sign representation. (Setting $q=1$ in the Hilbert series formula also immediately implies that the dimension of the entire coinvariant ring is $n!$.)
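For $n=3$, the top-degree and sign-representation claims can be checked directly (my own sketch, using sympy):

```python
from sympy import symbols, expand, prod, Poly

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]
Delta = expand(prod(xs[j] - xs[i] for i in range(3) for j in range(i + 1, 3)))

# Delta has top degree binom(3,2) = 3 ...
assert Poly(Delta, x1, x2, x3).total_degree() == 3

# ... and any transposition of the variables negates it (the sign representation)
swapped = Delta.subs({x1: x2, x2: x1}, simultaneous=True)
assert expand(swapped + Delta) == 0
```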

Finally, to show that the partial derivatives of $\Delta$ span the ring of coinvariants, we pass to the dual picture of harmonics. Consider the inner product on $\mathbb{C}[x_1,\ldots,x_n]$ given by $$\langle f,g \rangle=(\partial_f(g))_0,$$ where $\partial_f$ denotes the differential operator formed by replacing each $x_i$ in the polynomial $f$ by $\frac{\partial}{\partial x_i}$, and the subscript $0$ indicates that we take the constant term. This is a well-defined inner product for which the monomials form an orthogonal basis. Under this inner product, we can consider the orthogonal complement of the ideal $I=(p_1,\ldots,p_n)$.
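A minimal sympy sketch of this pairing (the helper name `diff_pairing` is my own, not from the post):

```python
from sympy import symbols, diff, Poly, S

x1, x2 = symbols('x1 x2')

def diff_pairing(f, g, xs):
    """<f, g> = constant term of the operator f(d/dx_1, ..., d/dx_n) applied to g."""
    result = S.Zero
    for monom, coeff in Poly(f, *xs).terms():
        h = g
        for x, power in zip(xs, monom):
            if power:
                h = diff(h, x, power)
        result += coeff * h
    return result.subs({x: 0 for x in xs})  # take the constant term

# Monomials are orthogonal: distinct monomials pair to 0,
# and a monomial pairs with itself to a product of factorials.
assert diff_pairing(x1 * x2, x1**2, [x1, x2]) == 0
assert diff_pairing(x1**2 * x2, x1**2 * x2, [x1, x2]) == 2  # 2! * 1! = 2
```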

The orthogonal complement can be described explicitly as the space of all polynomials $f$ which are killed by the “power sum operators” $\frac{\partial^k}{\partial x_1^k}+\cdots+\frac{\partial^k}{\partial x_n^k}$ for all $k$. This space is called the space of harmonics, and it is isomorphic to the ring of coinvariants, since the orthogonal complement of the ideal $I$ maps isomorphically onto the quotient by $I$.
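As a concrete check (my sketch, not from the post), for $n=3$ the Vandermonde determinant itself is killed by all the power sum operators:

```python
from sympy import symbols, diff, expand, prod

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]
Delta = expand(prod(xs[j] - xs[i] for i in range(3) for j in range(i + 1, 3)))

# Apply d^k/dx1^k + d^k/dx2^k + d^k/dx3^k for k = 1, 2, 3; each kills Delta.
# (For k > 3 the operators vanish on Delta trivially, since Delta has degree 3.)
for k in range(1, 4):
    image = sum(diff(Delta, x, k) for x in xs)
    assert expand(image) == 0
```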

Finally, since the Vandermonde determinant $\Delta$ lies in the space of harmonics, all of its partial derivatives do too, since the power sum operators commute with the partial derivative operators. Moreover, if we consider only the partial derivatives of the form $$\frac{\partial^{a_1}}{\partial x_1^{a_1}}\cdots \frac{\partial^{a_n}}{\partial x_n^{a_n}}\Delta$$ in which $a_i\le i-1$ for all $i$, then there are exactly $1\cdot 2\cdots n=n!$ of them, and their leading terms in lexicographic order are all distinct. Thus they are linearly independent, and hence span the space of harmonics, which has dimension $n!$.
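Here is a sympy sketch (my own) of this count and the independence check for $n=3$:

```python
from itertools import product
from sympy import symbols, diff, expand, prod, Poly, Matrix

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]
Delta = expand(prod(xs[j] - xs[i] for i in range(3) for j in range(i + 1, 3)))

# All derivatives of Delta indexed by (a_1, a_2, a_3) with a_i <= i-1
derivs = []
for code in product(range(1), range(2), range(3)):
    h = Delta
    for x, a in zip(xs, code):
        if a:
            h = diff(h, x, a)
    derivs.append(expand(h))
assert len(derivs) == 6  # 1 * 2 * 3 = 3!

# Coefficient matrix of the derivatives over all monomials that appear;
# full rank 6 means they are linearly independent.
monoms = sorted({m for d in derivs for m in Poly(d, *xs).monoms()})
rows = [[dict(Poly(d, *xs).terms()).get(m, 0) for m in monoms] for d in derivs]
assert Matrix(rows).rank() == 6
```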

(Combinatorial Bonus: Notice that this choice of partial derivatives corresponds with the Carlitz Codes described in this post, which are $q$-counted by the $q$-factorial.)
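A quick enumeration (my sketch) confirming the $q$-count for $n=3$:

```python
from itertools import product
from sympy import symbols, expand

q = symbols('q')
n = 3

# Carlitz codes: tuples (a_1, ..., a_n) with 0 <= a_i <= i-1
codes = list(product(*[range(i) for i in range(1, n + 1)]))
assert len(codes) == 6  # 1 * 2 * 3 = 3!

# q-count by the sum of the entries: this gives the q-factorial (3)_q!
gen_fn = expand(sum(q**sum(code) for code in codes))
assert expand(gen_fn - (1 + q) * (1 + q + q**2)) == 0  # 1 + 2q + 2q^2 + q^3
```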

To sum up, it is possible to generalize the nice decomposition of polynomials in two variables into a symmetric and an antisymmetric component, by using $n!$ components in the $n$-variable case. These components correspond to the $n!$ partial derivatives of $\Delta=\prod_{i < j} (x_j-x_i)$ that come from the Carlitz codes $(a_1,\ldots,a_n)$ with $a_i\le i-1$ for each $i$. Mystery solved!

6 thoughts on “Writing polynomials in terms of invariants and coinvariants”

      • Just discovered this blog– very nice!

        Expanding on what Allen K. said about Schubert polynomials, I like to think of at least three different well-known bases for the polynomial ring in n variables over the symmetric polynomials which model the q-factorial in 3 ways:

        (1) The Schubert polynomials model the q-factorial as the generating function for permutations w by inv(w), the inversion number or Coxeter group length.

        (2) Garsia’s “descent monomial basis” models the q-factorial as the generating function for permutations w by maj(w), the major index.

        (3) Then there is what Garsia calls “the Artin basis”, that is the set of all monomials in x_1,…,x_n for which the degree of x_i is at most i-1. They appear in E. Artin’s old slim paperback book “Galois theory”, when he proves the fundamental theorem of symmetric functions. And these monomials model the q-factorial as a product of q-numbers, similarly to your partial derivatives of the Vandermonde product.
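The equidistribution behind (1) and (2), MacMahon's theorem that inv and maj have the same generating function, can be checked for $n=3$ (a sketch of mine, not part of the original comment):

```python
from itertools import permutations
from sympy import symbols, expand

q = symbols('q')

def inv(w):  # number of inversions (Coxeter group length)
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):  # major index: sum of the descent positions
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

perms = list(permutations(range(1, 4)))
by_inv = expand(sum(q**inv(w) for w in perms))
by_maj = expand(sum(q**maj(w) for w in perms))
q_fact = expand((1 + q) * (1 + q + q**2))  # (3)_q!
assert expand(by_inv - q_fact) == 0
assert expand(by_maj - q_fact) == 0
```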

        • Thanks for the detailed reply! I’m sorry it took me so long to see that you commented and approve it. You should be able to comment freely from now on. 🙂


  1. I think I’m missing something – why are we tensoring with the base ring $\mathbb{C}[S_n]$? For instance, wouldn’t that make $(x+y)(x+2y) = (-1 \cdot (x+y))\otimes (x+2y) = (x+y)\otimes (-1\cdot (x+2y)) = (x+y)(2x+y)$? (Here $-1$ is the nonidentity element of $S_2$ considered inside $\mathbb{C}[S_2]$.)

    By the way, your blog is awesome!
