Polynomials and Representations XXV

Properties of the Young Symmetrizer

Recall that for a filling T, we have R(T), C(T) \le S_d, the subgroups of permutations which take each element of the i-th row (resp. column) of T to an element of the i-th row (resp. column) of T. Then:

a_T = \sum_{g\in R(T)} g,\quad b_T = \sum_{g\in C(T)} \chi(g)g,\quad c_T = a_T b_T,

where c_T is the Young symmetrizer. Recall the following results from earlier:

g\in R(T) \implies a_T g = g a_T = a_T, \\ g\in C(T) \implies b_T g = g b_T = \chi(g)b_T.
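
These objects are concrete enough to experiment with directly. Here is a minimal Python sketch (my illustration, not part of the original post) which models elements of \mathbb{C}[S_d] as dictionaries from permutation tuples to coefficients, builds a_T, b_T, c_T for the filling T with rows (1,2),(3), and checks the two identities above:

```python
from itertools import permutations

# Permutations are tuples p with p[i-1] = p(i); an element of C[S_d] is a
# dict mapping permutation tuples to coefficients.

def compose(p, q):                 # (p∘q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(len(p)))

def sign(p):                       # parity of p via inversion count
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def stab(blocks, d):               # all g in S_d permuting each block within itself
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]

def mult(u, v):                    # product in the group algebra
    w = {}
    for p, a in u.items():
        for q, b in v.items():
            r = compose(p, q)
            w[r] = w.get(r, 0) + a * b
    return {p: c for p, c in w.items() if c}

def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]
def a_elt(T, d): return {g: 1 for g in stab(rows(T), d)}          # a_T
def b_elt(T, d): return {g: sign(g) for g in stab(cols(T), d)}    # b_T

d, T = 3, ((1, 2), (3,))           # filling of shape (2,1)
aT, bT = a_elt(T, d), b_elt(T, d)
cT = mult(aT, bT)                  # the Young symmetrizer c_T = a_T b_T

for g in stab(rows(T), d):         # g in R(T):  a_T g = g a_T = a_T
    assert mult(aT, {g: 1}) == aT == mult({g: 1}, aT)
for h in stab(cols(T), d):         # h in C(T):  b_T h = h b_T = chi(h) b_T
    scaled = {p: sign(h) * c for p, c in bT.items()}
    assert mult(bT, {h: 1}) == scaled == mult({h: 1}, bT)
print(cT)                          # e + (12) - (13) - (12)(13), as a dict
```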

The following is obvious.

Lemma 1. If g\in S_d, then a_{g(T)} = g a_T g^{-1} and b_{g(T)} = g b_T g^{-1}. Thus c_{g(T)} = g c_T g^{-1}.


This follows from R(g(T)) = g R(T) g^{-1} and C(g(T)) = g C(T)g^{-1}. E.g. the latter gives:

b_{g(T)} = \sum_{x \in C(g(T))} \chi(x)x = \sum_{x\in C(T)} \chi(gxg^{-1}) gxg^{-1} = g b_T g^{-1}.
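
As a quick machine check of lemma 1 (again an illustration, with the helpers repeated so the block runs on its own), we can compare c_{g(T)} with g c_T g^{-1} for every g \in S_3:

```python
from itertools import permutations

# Helpers repeated from the previous sketch.
def compose(p, q): return tuple(p[q[i] - 1] for i in range(len(p)))
def sign(p):
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))
def stab(blocks, d):
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]
def mult(u, v):
    w = {}
    for p, a in u.items():
        for q, b in v.items():
            r = compose(p, q)
            w[r] = w.get(r, 0) + a * b
    return {p: c for p, c in w.items() if c}
def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]
def a_elt(T, d): return {g: 1 for g in stab(rows(T), d)}
def b_elt(T, d): return {g: sign(g) for g in stab(cols(T), d)}

def act(g, T):                     # apply g to the entries of the filling T
    return tuple(tuple(g[i - 1] for i in row) for row in T)

def inv(p):                        # inverse permutation
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v - 1] = i + 1
    return tuple(q)

d, T = 3, ((1, 2), (3,))
cT = mult(a_elt(T, d), b_elt(T, d))
for g in permutations(range(1, d + 1)):            # all of S_3
    gT = act(g, T)
    lhs = mult(a_elt(gT, d), b_elt(gT, d))         # c_{g(T)}
    rhs = mult({g: 1}, mult(cT, {inv(g): 1}))      # g c_T g^{-1}
    assert lhs == rhs
print("lemma 1 verified over S_3")
```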


The following generalizes the Young symmetrizer.

Proposition 1. Consider the element c = a_T b_{T'} for fillings T, T' of shape \lambda. Note that since \mathbb{C}[G]c is a quotient of \mathbb{C}[G]a_T (via right-multiplication by b_{T'}) and a submodule of \mathbb{C}[G]b_{T'}, it is either 0 or isomorphic to V_\lambda.

The following are equivalent.

  • c \ne 0;
  • no two distinct i,j lie in the same row of T and same column of T';
  • there exist g \in R(T) and g' \in C(T') such that g(T) = g'(T').


The third condition says that we can change T to T' by permuting the elements within each row, then within each column.
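
Before the proof, note that the shape (2,1) is small enough to test the proposition exhaustively. The sketch below (an illustration of mine, helpers as before) checks that the three conditions agree on all 36 ordered pairs of fillings:

```python
from itertools import permutations

# Helpers as in the earlier sketches.
def compose(p, q): return tuple(p[q[i] - 1] for i in range(len(p)))
def sign(p):
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))
def stab(blocks, d):
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]
def mult(u, v):
    w = {}
    for p, a in u.items():
        for q, b in v.items():
            r = compose(p, q)
            w[r] = w.get(r, 0) + a * b
    return {p: c for p, c in w.items() if c}
def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]
def a_elt(T, d): return {g: 1 for g in stab(rows(T), d)}
def b_elt(T, d): return {g: sign(g) for g in stab(cols(T), d)}
def act(g, T): return tuple(tuple(g[i - 1] for i in row) for row in T)

def clash(T1, T2):
    # two distinct entries in the same row of T1 and the same column of T2?
    return any(len(set(r) & set(c)) >= 2 for r in rows(T1) for c in cols(T2))

def fillings(shape, d):
    # all d! fillings of the shape by 1..d
    for p in permutations(range(1, d + 1)):
        it = iter(p)
        yield tuple(tuple(next(it) for _ in range(r)) for r in shape)

d, shape = 3, (2, 1)
for T1 in fillings(shape, d):
    for T2 in fillings(shape, d):
        nonzero = bool(mult(a_elt(T1, d), b_elt(T2, d)))   # c = a_{T1} b_{T2} != 0
        cond3 = any(act(g, T1) == act(h, T2)
                    for g in stab(rows(T1), d) for h in stab(cols(T2), d))
        assert nonzero == (not clash(T1, T2)) == cond3
print("proposition 1 verified on all 36 pairs of (2,1)-fillings")
```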


Suppose c \ne 0 but two distinct i, j lie in the same row of T and the same column of T'. Let g \in S_d be the transposition swapping them; then g \in R(T) \cap C(T'). Letting C be a set of representatives for the cosets R(T)/\left<g\right>, we have a_T = (\sum_{x\in C} x)(1+g). But g b_{T'} = \chi(g)b_{T'} = -b_{T'}, so (1+g)b_{T'} = 0 and c = a_T b_{T'} = 0, a contradiction. Thus the first condition implies the second.

Now suppose the second condition holds. The elements of the first row of T lie in different columns of T'. Bringing them to the first row of T' by permuting within columns, and then reordering the first row of T, we obtain g'_1 \in C(T') and g_1 \in R(T) such that g_1(T) and g'_1(T') have identical first rows.


Likewise, since the elements of the second row of g_1(T) lie in different columns of g'_1(T'), there exist g'_2 \in C(T') and g_2 \in R(T) such that g_2 g_1(T) and g'_2 g'_1(T') have the same first and second rows. Repeating this, we eventually get g \in R(T) and g' \in C(T') with g(T) = g'(T'), as desired.

Finally, suppose the third condition holds; let T'' = g(T) = g'(T'). By lemma 1,

a_{T''} = g a_T g^{-1} = a_T, \quad b_{T''} = g' b_{T'} g'^{-1} = b_{T'},

so c = a_T b_{T'} = a_{T''}b_{T''} = c_{T''} \ne 0. ♦

Lemma 2. If T > T' are SYT of the same shape, with respect to the total order defined below, then T, T' do not satisfy the conditions of proposition 1.


Order all the fillings of shape \lambda as follows: given fillings T \ne T', let k be the largest number occurring in different squares of T and T'; we write T' > T if k occurs earlier in the word w(T') than in w(T) (recall that w(T) reads the rows of T from the bottom row to the top). For example, for the two SYT of shape (2,1), with rows (1,2),(3) and rows (1,3),(2) respectively, the words are 312 and 213; the largest entry occurring in different squares is k = 3, which occurs earlier in 312, so the first tableau is the greater one.


Note that for any SYT T we have:

g \in R(T), g' \in C(T) \implies g(T) \ge T, g'(T) \le T

since the largest entry of T which is moved by g must move to its left in w(T), and the largest entry moved by g' moves to its right in w(T). Thus if T > T' are SYT and g(T) = g'(T') for some g\in R(T), g'\in C(T'), we would have T \le g(T) = g'(T') \le T', contradicting T > T'. ♦
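
Both the order and the two inequalities above are easy to test mechanically; the following self-contained sketch (mine, hard-coding the bottom-to-top reading of w(T)) does so for a few SYT:

```python
from itertools import permutations

def stab(blocks, d):
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]
def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]
def act(g, T): return tuple(tuple(g[i - 1] for i in row) for row in T)

def word(T):                       # row word: rows read from the bottom row up
    return [x for row in reversed(T) for x in row]

def greater(T1, T2):
    # T1 > T2: the largest entry in different squares occurs earlier in w(T1)
    w1, w2 = word(T1), word(T2)
    k = max(x for x in w1 if w1.index(x) != w2.index(x))
    return w1.index(k) < w2.index(k)

for T in [((1, 2), (3,)), ((1, 3), (2,)), ((1, 2), (3, 4)), ((1, 3), (2, 4))]:
    d = sum(len(r) for r in T)
    for g in stab(rows(T), d):     # g(T) >= T
        assert act(g, T) == T or greater(act(g, T), T)
    for h in stab(cols(T), d):     # h(T) <= T
        assert act(h, T) == T or greater(T, act(h, T))
print("g(T) >= T and g'(T) <= T verified on four SYT")
```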


Lemma 3. For any v\in \mathbb{C}[G], v is a multiple of c_T if and only if:

  • for any g\in R(T) and h \in C(T), we have gvh = \chi(h)v.


⇒ is left as an easy exercise: for v = c_T, one checks that g c_T = c_T and c_T h = \chi(h) c_T directly from the recalled properties of a_T and b_T.

For ⇐, write v= \sum_{x\in G} \alpha_x x where \alpha_x \in \mathbb{C}. Then \alpha_{gxh} = \chi(h)\alpha_x for any x\in G, g\in R(T) and h\in C(T). Taking x = e gives \alpha_{gh} = \chi(h)\alpha_e, which is exactly the coefficient of gh in \alpha_e c_T (such a product determines g and h, since R(T) \cap C(T) = \{e\}). Hence it suffices to show: if x\in G is not of the form gh for g\in R(T), h\in C(T), then \alpha_x = 0, for then v = \alpha_e c_T.

For that, consider the filling T' := x(T). We claim that T, T' do not satisfy the conditions of proposition 1.

  • Indeed, if g(T) = g'(T') for some g\in R(T) and g'\in C(T'), then

x=g'^{-1}g, \quad g' \in C(T') = C(xT) = x C(T)x^{-1}.

  • So x^{-1} g' x = g^{-1} g' g \in C(T), and thus x = g(g^{-1}g'g)^{-1} \in R(T)C(T), which is a contradiction.

Hence there exist distinct i, j in the same row of T and the same column of T'. Let t be the transposition (ij); then t \in R(T), so tv = v by the given condition. Since t \in C(T') = x C(T) x^{-1}, the element t' := x^{-1}tx lies in C(T) and satisfies vt' = \chi(t') v = -v. Now consider the coefficient of tx in v. We have:

\begin{aligned} tv = v&\implies \alpha_x = \alpha_{tx},\\ vt' = -v &\implies \alpha_{x} = -\alpha_{tx}.\end{aligned}

Thus \alpha_x = 0 as desired. ♦
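
Lemma 3 can also be confirmed numerically for small d: the conditions gvh = \chi(h)v are linear in the coefficients \alpha_x, so the solution space should be exactly the line spanned by c_T. Here is a sketch of that check (my illustration, assuming numpy is available):

```python
import numpy as np
from itertools import permutations

# Helpers as in the earlier sketches.
def compose(p, q): return tuple(p[q[i] - 1] for i in range(len(p)))
def sign(p):
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))
def stab(blocks, d):
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]
def mult(u, v):
    w = {}
    for p, a in u.items():
        for q, b in v.items():
            r = compose(p, q)
            w[r] = w.get(r, 0) + a * b
    return {p: c for p, c in w.items() if c}
def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]

d, T = 3, ((1, 2), (3,))
G = list(permutations(range(1, d + 1)))
idx = {g: i for i, g in enumerate(G)}

# One linear constraint per (g, h, x):  alpha_{gxh} - chi(h) alpha_x = 0.
A = []
for g in stab(rows(T), d):
    for h in stab(cols(T), d):
        for x in G:
            row = [0.0] * len(G)
            row[idx[compose(compose(g, x), h)]] += 1.0
            row[idx[x]] -= sign(h)
            A.append(row)
A = np.array(A)

print(len(G) - np.linalg.matrix_rank(A))    # 1: the solution space is a line

cT = mult({g: 1 for g in stab(rows(T), d)},
          {g: sign(g) for g in stab(cols(T), d)})
v = np.array([cT.get(g, 0) for g in G], dtype=float)
assert np.allclose(A @ v, 0)                # and c_T spans it
```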


Here is one immediate application of lemma 3.

Proposition 2. For any w \in \mathbb{C}[G], c_T w c_T is a scalar multiple of c_T.

In the case w = e we have c_T^2 = n c_T, where n = \frac {d!}{\dim V_\lambda}.


Indeed if g\in R(T) and h\in C(T) we have:

g(c_T w c_T) h = (gc_T) w(c_T h) = (c_T)w(\chi(h) c_T) = \chi(h) c_T w c_T

so by lemma 3, c_T w c_T is a scalar multiple of c_T.

For c_T^2, let P be right-multiplication by c_T on \mathbb{C}[G] and let A be its trace. P takes g \mapsto g c_T, in which the coefficient of g is 1 (the coefficient of e in c_T), so computing A in the basis comprising the elements of G gives A = d!. On the other hand, writing c_T^2 = n c_T, the image of P is \mathbb{C}[G]c_T \cong V_\lambda, on which P acts as the scalar n since each v c_T \mapsto v c_T^2 = n v c_T. From P^2 = nP the eigenvalues of P are 0 and n, so A = n \cdot \dim V_\lambda, and hence n \dim V_\lambda = d!. ♦
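
For the filling T with rows (1,2),(3) we have \dim V_{(2,1)} = 2, so the proposition predicts c_T^2 = 3 c_T; this is quick to confirm (my illustration, helpers as before):

```python
from math import factorial
from itertools import permutations

# Helpers as in the earlier sketches.
def compose(p, q): return tuple(p[q[i] - 1] for i in range(len(p)))
def sign(p):
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))
def stab(blocks, d):
    maps = [{}]
    for blk in blocks:
        maps = [{**m, **dict(zip(blk, img))}
                for m in maps for img in permutations(blk)]
    return [tuple(m.get(i, i) for i in range(1, d + 1)) for m in maps]
def mult(u, v):
    w = {}
    for p, a in u.items():
        for q, b in v.items():
            r = compose(p, q)
            w[r] = w.get(r, 0) + a * b
    return {p: c for p, c in w.items() if c}
def rows(T): return [list(r) for r in T]
def cols(T): return [[r[j] for r in T if j < len(r)] for j in range(len(T[0]))]

d, T = 3, ((1, 2), (3,))
cT = mult({g: 1 for g in stab(rows(T), d)}, {g: sign(g) for g in stab(cols(T), d)})

n = factorial(d) // 2              # dim V_(2,1) = 2, so n should be 3!/2 = 3
assert mult(cT, cT) == {p: n * c for p, c in cT.items()}

e = tuple(range(1, d + 1))
assert cT[e] == 1                  # coefficient of the identity, so the trace is d!
print("c_T^2 = 3 c_T verified")
```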

This gives the following.

Theorem. We have, as a direct sum of irreps:

\displaystyle \mathbb{C}[S_d] = \bigoplus_{\lambda \vdash d} \bigoplus_{\substack{T\ \text{SYT} \\ \text{of shape }\lambda}} \mathbb{C}[S_d] c_T.


Let us show that \sum_T \mathbb{C}[S_d] c_T is a direct sum, where T runs over all SYT of shape \lambda. Indeed, if not, then after discarding zero terms and listing the tableaux in decreasing order (with respect to the order in the proof of lemma 2), we have:

\overbrace{v_1 c_{T_1}}^{\ne 0} + v_2 c_{T_2} + \ldots + v_m c_{T_m} = 0,\quad (*)

for some SYT T_1 > T_2 > \ldots > T_m and some v_1, \ldots, v_m \in \mathbb{C}[G]. Note that if T > T' then by lemma 2, T and T' do not satisfy the conditions in proposition 1, so we can find distinct i, j in the same row of T and the same column of T'; using the same technique as in the first paragraph of the proof of proposition 1, we get b_{T'} a_{T} = 0 and thus c_{T'} c_{T} = 0.

Since T_i < T_1 for each i \ge 2, right-multiplying (*) by c_{T_1} kills every term except the first and gives 0 = v_1 c_{T_1}^2 = n v_1 c_{T_1}, so v_1 c_{T_1} = 0, a contradiction. Hence, the inner sum is a direct sum. The entire sum is a direct sum since there is no common irrep between distinct \lambda.

Hence, the RHS is a subspace of the LHS. Now we count the dimension: on LHS we get d!; on RHS we get \sum_\lambda f_\lambda\dim V_\lambda = \sum_\lambda f_\lambda^2 = d! so the two sides are equal. ♦
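
The closing dimension count is also easy to test directly: the sketch below (my illustration) counts SYT recursively by removing the corner square containing the largest entry, and confirms \sum_{\lambda\vdash d} f_\lambda^2 = d! for small d.

```python
from math import factorial
from functools import lru_cache

def partitions(n, largest=None):
    # all partitions of n as weakly decreasing tuples
    largest = largest or n
    if n == 0:
        yield ()
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

@lru_cache(maxsize=None)
def f(shape):
    """Number of SYT of the given shape: delete the corner holding the largest entry."""
    if sum(shape) <= 1:
        return 1
    total = 0
    for i, r in enumerate(shape):
        below = shape[i + 1] if i + 1 < len(shape) else 0
        if r > below:              # row i ends in a removable corner
            total += f(shape[:i] + ((r - 1,) if r > 1 else ()) + shape[i + 1:])
    return total

for d in range(1, 8):
    assert sum(f(lam) ** 2 for lam in partitions(d)) == factorial(d)
print("sum of f_lambda^2 over partitions of d equals d!, for d = 1..7")
```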

