Polynomials and Representations III

Complete Symmetric Polynomials

Corresponding to the elementary symmetric polynomials, we define the complete symmetric polynomials in \Lambda_n to be:

h_0=1, \qquad h_k = \sum_{1 \le i_1 \le i_2 \le \ldots \le i_k \le n} x_{i_1} x_{i_2} \ldots x_{i_k} = \sum_{a_1 + \ldots + a_n = k} x_1^{a_1} x_2^{a_2} \ldots x_n^{a_n}.

For example when n=3, we have:

\begin{aligned} h_3 &= x_1^3 + x_2^3 + x_3^3 + x_1^2 x_2 + x_1 x_2^2 + x_2^2 x_3 + x_2 x_3^2 + x_1^2 x_3 + x_1 x_3^2 + x_1 x_2 x_3\\ &= m_3 + m_{21} + m_{111}. \end{aligned}

Thus, written as a sum of monomial symmetric polynomials, we have h_k = \sum_{\lambda\vdash k} m_\lambda. Note that while the elementary symmetric polynomials only go up to e_n, the complete symmetric polynomial h_k is defined for all k. Finally, we define as before:

Definition. If \lambda is any partition, we define:

h_\lambda := h_{\lambda_1} h_{\lambda_2} \ldots h_{\lambda_l},

assuming \lambda_{l+1} = \lambda_{l+2} = \ldots = 0; since h_0 = 1, trailing zeros do not affect h_\lambda.
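As a quick sanity check of the definition (a sketch, not part of the original text; the helper h below simply evaluates h_k at sample values): the monomials of h_k in n variables correspond to multisets of size k, so there are \binom{n+k-1}{k} of them, e.g. \binom{5}{3} = 10 for h_3 when n = 3, matching the ten terms above.

```python
from itertools import combinations_with_replacement
from math import comb, prod

def h(k, xs):
    """Complete symmetric polynomial h_k evaluated at the sample values xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# Evaluating at x_i = 1 counts the monomials of h_k: there are C(n+k-1, k) of them.
print(h(3, [1, 1, 1]), comb(3 + 3 - 1, 3))  # 10 10
```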

Proceeding as before, let us write h_\lambda in terms of the monomial symmetric polynomials m_\mu.

Theorem. We have:

h_\lambda = \sum_{\mu} M_{\lambda\mu} m_\mu

where M_{\lambda\mu} is the number of matrices (a_{ij}) with non-negative integer entries such that \sum_j a_{ij} = \lambda_i for each i (row sums) and \sum_i a_{ij} = \mu_j for each j (column sums).

Proof

The proof proceeds as earlier. Let us take the example of \lambda = (2, 2, 2) and \mu = (3, 2, 1). Multiplying h_2 h_2 h_2, we pick the following terms to obtain the product x_1^3 x_2^2 x_3.

\begin{aligned} h_2 &= \boxed{x_1^2} + x_2^2 + \ldots + x_1 x_2 + x_1 x_3 + x_2 x_3 + \ldots \\ h_2 &= x_1^2 + x_2^2 + \ldots + \boxed{x_1 x_2} + x_1 x_3 + x_2 x_3 + \ldots \\ h_2 &= x_1^2 + x_2^2 + \ldots + x_1 x_2 + x_1 x_3 + \boxed{x_2 x_3} + \ldots \\ \end{aligned} \implies \begin{array}{|c|ccc|}\hline 2 & 2 & 0 & 0 \\2 & 1 & 1 & 0 \\2 & 0 & 1 & 1 \\ \hline & 3 & 2 & 1 \\ \hline\end{array}.

Thus each matrix corresponds to a way of obtaining x^\mu := \prod_i x_i^{\mu_i} by taking terms from h_{\lambda_1}, h_{\lambda_2}, etc. ♦

Example

Suppose we take partitions \lambda = (2, 2), \mu = (2, 1, 1). Then M_{\lambda\mu} = 4 since we have the following matrices:

four_integer_matrices
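The count can be confirmed by brute force (a Python sketch; count_M is a hypothetical helper that enumerates all candidate matrices, feasible only for small partitions):

```python
from itertools import product

def count_M(lam, mu):
    """Number of non-negative integer matrices with row sums lam and column sums mu."""
    cols = len(mu)
    def rows_with_sum(s):
        return [r for r in product(range(s + 1), repeat=cols) if sum(r) == s]
    return sum(
        all(sum(col) == m for col, m in zip(zip(*mat), mu))
        for mat in product(*(rows_with_sum(s) for s in lam))
    )

print(count_M((2, 2), (2, 1, 1)))  # 4
```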

Exercise

Compute M_{\lambda\mu} for all partitions \lambda, \mu of 4. Calculate the resulting 5 × 5 matrix, ordering the partitions reverse-lexicographically.


Generating Functions

The elementary symmetric polynomials satisfy the following:

e_0 + e_1 t + e_2 t^2 + \ldots + e_n t^n = (1 + x_1 t)(1 + x_2 t) \ldots (1 + x_n t).

Thus their generating function is given by E(t) := \prod_{i=1}^n (1 + x_i t). Next, the generating function for the h_k's is given by:

\begin{aligned}H(t) &:= h_0 + h_1 t + h_2 t^2 + \ldots \\ &= (1 + x_1 t + x_1^2 t^2 + \ldots) (1 + x_2 t + x_2^2 t^2 + \ldots ) \ldots (1 + x_n t + x_n^2 t^2 + \ldots)\\ &= \prod_{i=1}^n \frac 1 {1 - x_i t}.\end{aligned}

From H(t)E(-t) = 1, we obtain the following relation:

e_0 h_k - e_1 h_{k-1} + \ldots + (-1)^k e_k h_0 = 0, \qquad \text{ for } k=1, 2, \ldots.

Note that e_i = 0 for i > n.
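As a numeric sanity check of H(t)E(-t) = 1 (a sketch with the sample values x = (2, 3, 5), truncating all series at degree 5):

```python
# Truncated power-series check of H(t) * E(-t) = 1 for n = 3 sample values.
xs = [2, 3, 5]
D = 6  # work modulo t^D

def ser_mul(a, b):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(D)]

E = [1] + [0] * (D - 1)
H = [1] + [0] * (D - 1)
for x in xs:
    E = ser_mul(E, [1, x] + [0] * (D - 2))    # factor (1 + x t)
    H = ser_mul(H, [x**k for k in range(D)])  # factor 1/(1 - x t)
E_neg = [(-1)**k * c for k, c in enumerate(E)]  # E(-t)
print(ser_mul(H, E_neg))  # [1, 0, 0, 0, 0, 0]
```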

From this recurrence relation, we can express each h_k as a polynomial in e_1, \ldots, e_n. E.g.

\begin{aligned} h_1 &= e_1, \\h_2 &= e_1^2 - e_2,\\h_3 &= e_1^3 - 2 e_1 e_2 + e_3,\\&\vdots\end{aligned}
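These expressions are easy to spot-check numerically (a sketch; e and h below simply evaluate the symmetric polynomials at arbitrary sample values):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = [2, 3, 5]  # arbitrary sample values

def e(k):  # elementary symmetric polynomial
    return sum(prod(c) for c in combinations(xs, k))

def h(k):  # complete symmetric polynomial
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

print(h(1) == e(1))                          # True
print(h(2) == e(1)**2 - e(2))                # True
print(h(3) == e(1)**3 - 2*e(1)*e(2) + e(3))  # True
```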


Duality Between e and h

From the symmetry of the recurrence relation, we can swap the h's and e's and the expressions remain correct, e.g. e_3 = h_1^3 - 2 h_1 h_2 + h_3. As another example, if n=3 then e_4 = 0, which gives the relation h_1^4 - 3h_1^2 h_2 + 2h_1 h_3 + h_2^2 - h_4 = 0.

Definition. Since \Lambda_n \cong \mathbb{Z}[e_1, \ldots, e_n] is a free commutative ring, we can define a graded ring homomorphism

\omega: \Lambda_n \to \Lambda_n, \qquad e_i \mapsto h_i

for 1\le i \le n.

From what we have seen, the following comes as no surprise.

Proposition. \omega is an involution, i.e. \omega^2 is the identity on \Lambda_n.

Proof.

We will prove by induction on k that \omega(h_k) = e_k for 0\le k\le n. For k=0 this is obvious; suppose 0<k\le n. Apply \omega to the above recurrence relation; since \omega(e_i) = h_i for 0\le i\le k we have:

h_0 \omega(h_k) - h_1 \omega(h_{k-1}) + \ldots + (-1)^k h_k \omega(h_0) = 0.

By induction hypothesis \omega(h_i) = e_i for i=0, \ldots, k-1; since h_0 = 1 we have

\omega(h_k) = h_1 e_{k-1} - h_2 e_{k-2} + \ldots + (-1)^{k-1} h_k e_0 = h_0 e_k = e_k.

Hence \omega^2(e_k) = e_k for all k; since e_1,\ldots, e_n generate \Lambda_n we are done. ♦

Now suppose |\lambda| = d; write h_\lambda \in \Lambda_n^{(d)} as an integer linear combination of the e_\mu for |\mu| = d, \mu_1 \le n. Applying \omega, this gives e_\lambda in terms of h_\mu for |\mu| = d, \mu_1 \le n. In particular, we get:

Corollary. The following gives a \mathbb{Z}-basis of \Lambda_n^{(d)}:

\{h_\lambda : |\lambda| = d, \lambda_1 \le n\}.

Hence we also have \Lambda_n \cong \mathbb{Z}[h_1, \ldots, h_n] as a free commutative ring; the isomorphism preserves the grading, where \deg(h_i)=i.

Exercise

Consider the matrices \mathbf M = (M_{\lambda\mu}) and \mathbf N = (N_{\lambda\mu}), where \lambda, \mu run through all partitions of d. Using the involution \omega, prove that

\left(\mathbf M\mathbf N^{-1}\right)^2 = \mathbf I.

In particular, \mathbf M is invertible; this is not obvious from its definition.

Exercise

Since h_{n+1}, h_{n+2}, \ldots \in \Lambda_n = \mathbb{Z}[h_1, \ldots, h_n], each h_k can be uniquely expressed as a polynomial in h_1, \ldots, h_n. For n=3, express h_4, h_5 in terms of h_1, h_2, h_3.


Polynomials and Representations II

More About Partitions

Recall that a partition \lambda is a sequence of weakly decreasing non-negative integers, where appending or dropping ending zeros gives us the same partition. A partition is usually represented graphically as a table of boxes or dots:

partition_boxes_and_dots

We will be using the left diagram – this is called the Young diagram of \lambda.

We will also use the notation |\lambda| := \sum_i \lambda_i, and write l(\lambda) for the largest i for which \lambda_i > 0. If |\lambda| = d, we also say \lambda \vdash d. For example if \lambda = (5, 4, 2), then:

  • |\lambda| = 11;
  • l(\lambda) = 3;
  • \lambda is a partition of 11, or \lambda \vdash 11.

Definition. If \lambda is a partition, its transpose \overline \lambda is the partition obtained by flipping the Young diagram about the main diagonal.

partition_transpose_by_diagram

We will be labeling variables v_\lambda by partitions, so it helps to have an ordering on the set of all partitions of d. The lexicographical order (or dictionary order) on the set \{\lambda : \lambda \vdash d\} is given as follows: \lambda < \mu if and only if there is an i \ge 0 such that

\lambda_1 = \mu_1, \ \lambda_2 = \mu_2,\ \ldots,\ \lambda_i = \mu_i,\ \lambda_{i+1} < \mu_{i+1}.

For example, the set of all partitions of 5 is ordered as follows:

(1, 1, 1, 1, 1) < (2, 1, 1, 1) < (2, 2, 1) < (3, 1, 1) < (3, 2) < (4, 1) < (5).

However, algorithmically it is easier to generate the set of partitions in the reverse lexicographical order, so in labeling v_\lambda we will use that instead.
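Such a generator can be sketched recursively, choosing the largest part first (a sketch, not from the original text):

```python
def partitions(d, maxpart=None):
    """Yield the partitions of d in reverse lexicographical order."""
    maxpart = d if maxpart is None else maxpart
    if d == 0:
        yield ()
        return
    for first in range(min(d, maxpart), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

print(list(partitions(5)))
# [(5,), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]
```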


Dominance Ordering of Partitions

Now, the lexicographical ordering is total (i.e. any two partitions can be compared), but it does not interact well with taking transposes. E.g. for partitions of 6, we have:

  • (2,2,1,1) < (2,2,2) and taking the transpose gives (4,2) > (3,3).
  • (2,2,2) < (3,1,1,1) but taking the transpose gives (3,3) < (4,1,1).

Thus, we consider the following partial ordering instead.

Definition. Given partitions \lambda, \mu of d, we write \lambda \trianglelefteq \mu if for each i we have

\lambda_1 + \lambda_2 + \ldots + \lambda_i \le \mu_1 + \mu_2 + \ldots + \mu_i.

Although the definition makes sense for any two partitions, in practice we only use it to compare partitions with |\lambda| = |\mu|. We have the following.

Proposition. For \lambda, \mu \vdash d, we have \lambda \trianglelefteq \mu if and only if \overline\lambda \trianglerighteq \overline\mu.

Proof.

Suppose \lambda\trianglelefteq \mu. We first claim that, for each k\ge 1, we have:

(\overline\lambda_1 + \overline\lambda_2 + \ldots + \overline \lambda_k) + (\lambda_1 - k) + (\lambda_2 - k) + \ldots + (\lambda_j - k) = |\lambda| = |\mu|

where j = \overline\lambda_k is the largest value for which \lambda_j \ge k; equivalently, j is the largest value for which \lambda_j - k \ge 0. The claim can be readily seen from the diagram below:

partition_transpose_sum

Now (\lambda_1 - k) + \ldots + (\lambda_j - k) \le (\mu_1 - k) + \ldots + (\mu_j - k). Let j' = \overline\mu_k. We have two cases.

  • If j \le j', then this sum is at most (\mu_1 - k) + \ldots +(\mu_{j'} - k), since the extra terms \mu_{j+1} - k, \ldots, \mu_{j'} - k are non-negative.
  • If j > j', the sum is still at most (\mu_1 - k) + \ldots +(\mu_{j'} - k) since the excess terms \mu_{j'+1} - k, \ldots, \mu_j - k are negative.

Hence (\lambda_1 - k) + \ldots + (\lambda_j - k) \le (\mu_1 - k) + \ldots + (\mu_{j'} - k) which gives:

\overline \lambda_1 + \overline \lambda_2 + \ldots + \overline \lambda_k \ge \overline \mu_1+ \overline \mu_2 + \ldots + \overline \mu_k.

So \lambda \trianglelefteq \mu \implies \overline\lambda \trianglerighteq \overline\mu; replacing \lambda,\mu by their transposes we get the reverse implication. ♦

Thus, in our above example, (2,2,1,1) \trianglelefteq (2,2,2) and the transpose gives (4,2) \trianglerighteq (3,3). On the other hand (2,2,2) \not\trianglelefteq (3,1,1,1).
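Both notions are easy to implement; the sketch below reproduces the examples above (transpose and dominated are hypothetical helper names):

```python
def transpose(lam):
    """Transpose (conjugate) of a partition, read off column-by-column."""
    return tuple(sum(1 for part in lam if part > i) for i in range(lam[0])) if lam else ()

def dominated(lam, mu):
    """lam ⊴ mu: every partial sum of lam is at most the corresponding one of mu."""
    n = max(len(lam), len(mu))
    a, b = list(lam) + [0] * (n - len(lam)), list(mu) + [0] * (n - len(mu))
    return all(sum(a[:i + 1]) <= sum(b[:i + 1]) for i in range(n))

print(transpose((2, 2, 1, 1)), transpose((2, 2, 2)))             # (4, 2) (3, 3)
print(dominated((2, 2, 1, 1), (2, 2, 2)))                        # True
print(dominated(transpose((2, 2, 2)), transpose((2, 2, 1, 1))))  # True (reversed relation)
print(dominated((2, 2, 2), (3, 1, 1, 1)))                        # False
```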


Properties of N_{\lambda\mu}

Previously, we saw that the elementary symmetric polynomial e_\lambda can be expressed as:

\displaystyle e_\lambda = \sum_{\mu \vdash d} N_{\lambda\mu} m_\mu.

In the case where l(\mu) > n, the monomial m_\mu vanishes. Now we write the above expression vectorially:

\mathbf e = \mathbf N \mathbf m

Lemma 1. For each partition \lambda, we have N_{\lambda\overline \lambda} = 1.

Proof.

We claim that the unique binary matrix with row sums \lambda_i and column sums \overline\lambda_j is the one obtained from the Young diagram by replacing boxes (or dots) with 1’s. E.g. for \lambda = (5, 4, 2) and \overline \lambda = (3, 3, 2, 2, 1), the matrix must be:

\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 \end{pmatrix}.

Indeed, the first row must be filled with all 1’s since the number of columns is exactly \lambda_1. Removing the first row and dropping trailing zeros in \overline\lambda, the number of remaining columns is exactly \lambda_2 so the second row must be filled with all 1’s. And so on. ♦

Lemma 2. For any partitions \lambda, \mu of d, if N_{\lambda\mu}>0 then \lambda \trianglelefteq \overline\mu.

Proof.

Suppose (a_{ij}) is a binary matrix whose row sums are \lambda_i and column sums are \mu_j. Erase the 0’s and replace the 1’s in the matrix by consecutive 1, 2, …, d, where d = |\lambda| = |\mu|. E.g. if \lambda = (3, 2, 2, 1) and \mu = (2, 2, 2, 1, 1), pick:

\begin{array}{|c|ccccc|} \hline 3 & 1 &1 & 0 & 1 & 0 \\ 2 & 1 & 0 & 1 & 0 & 0\\ 2 & 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ \hline & 2 & 2 & 2 & 1 & 1 \\ \hline\end{array} \implies \begin{array}{|ccccc|}\hline 1 & 2 & & 3 & \\ 4 & & 5 & & \\ & & 6 & & 7 \\ & 8 & & & \\ \hline\end{array}

Left-justify the entries to obtain a matrix M_S. Similarly, top-justify them to obtain another matrix M_T:

M_S = \left[\begin{matrix} 1 & 2 & 3 \\ 4 & 5 \\6 & 7 \\8 \end{matrix}\right] \qquad M_T = \left[\begin{matrix} 1 & 2 & 5 & 3 & 7\\ 4 & 8 & 6\end{matrix}\right].

Note that M_S and M_T have shapes \lambda and \overline \mu respectively, i.e. there are \lambda_i terms in row i of M_S and \overline \mu_i terms in row i of M_T. By construction, if an element occurs in row i of M_S, then it must occur in row i or above in M_T. Thus, the number of elements in rows 1 through k of M_S is at most the number of elements in rows 1 through k of M_T and we have:

\lambda_1 + \lambda_2 + \ldots + \lambda_k \le \overline\mu_1 + \overline\mu_2 + \ldots + \overline\mu_k

as desired. ♦


Elementary Symmetric Polynomials as Basis

Now let \mathbf J be the permutation matrix which switches \lambda with its transpose \overline \lambda. The prior two lemmas show that \mathbf J \mathbf N is upper-triangular with all 1’s on the main diagonal (with the partitions listed in reverse lexicographical order, as in the example below). Hence \mathbf J\mathbf N is invertible over the integers, and so is \mathbf N.

Example: n=3, d=4.

We have:

\small\begin{pmatrix} e_{31} \\ e_{22} \\ e_{211} \\ e_{1111} \end{pmatrix} = \begin{pmatrix}0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 2 & 5 \\ 1 & 4 & 6 & 12  \end{pmatrix} \begin{pmatrix} m_4 \\ m_{31}\\ m_{22}\\ m_{211}\end{pmatrix}\\ \implies \mathbf N=\begin{pmatrix}0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 2\\ 0 & 1 & 2 & 5 \\ 1 & 4 & 6 & 12 \end{pmatrix}, \mathbf J = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\end{pmatrix}.
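The matrix can be double-checked by brute-force enumeration (a sketch; count_N is a hypothetical helper enumerating all 0/1 matrices with the given row sums, fine at this size):

```python
from itertools import product

def count_N(lam, mu):
    """Number of 0/1 matrices with row sums lam and column sums mu."""
    cols = len(mu)
    def rows_with_sum(s):
        return [r for r in product((0, 1), repeat=cols) if sum(r) == s]
    return sum(
        all(sum(col) == m for col, m in zip(zip(*mat), mu))
        for mat in product(*(rows_with_sum(s) for s in lam))
    )

es = [(3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]  # rows: e_31, e_22, e_211, e_1111
ms = [(4,), (3, 1), (2, 2), (2, 1, 1)]          # columns: m_4, m_31, m_22, m_211
print([[count_N(l, m) for m in ms] for l in es])
# [[0, 0, 0, 1], [0, 0, 1, 2], [0, 1, 2, 5], [1, 4, 6, 12]]
```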

Since we know that

\{m_\lambda : |\lambda| = d, l(\lambda) \le n\}

gives a \mathbb{Z}-basis of \Lambda_n^{(d)}, we see that

\{ e_\lambda : |\lambda| = d, \lambda_1 \le n\}

gives a \mathbb{Z}-basis as well, since \mathbf J (taking transpose) gives a bijection between partitions

\{\lambda: |\lambda| = d, l(\lambda) \le n\} \leftrightarrow \{ \lambda: |\lambda|=d, \lambda_1 \le n\}.

As a result:

Corollary. We have \Lambda_n \cong \mathbb{Z}[e_1, \ldots, e_n], where the RHS is a freely generated commutative ring. The isomorphism preserves the grading, where \deg(e_i) = i.

Exercise

Work out the matrices \mathbf J and \mathbf N for the cases (n, d) = (4, 4), (3, 5), (4, 5), (5, 5).


Polynomials and Representations I

We have already seen symmetric polynomials and some of their applications in an earlier article. Let us delve into this a little more deeply. Consider the ring \mathbb{Z}[x_1, \ldots, x_n] of integer polynomials. The symmetric group S_n acts on it by permuting the variables; specifically, \sigma \in S_n takes:

f(x_1, \ldots, x_n) \mapsto f(x_{\sigma(1)}, \ldots, x_{\sigma(n)}).

For example, \sigma = {\small\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1\end{pmatrix}} takes x_1 + 2x_2 + 3x_3 to x_2 + 2x_3 + 3x_1. Denote the ring of symmetric polynomials by:

\Lambda_n:= \mathbb{Z}[x_1, \ldots, x_n]^{S_n}.

This is a homogeneous ring, written as a direct sum:

\Lambda_n = \Lambda_n^{(0)} \oplus \Lambda_n^{(1)} \oplus \ldots,

where \Lambda_n^{(d)} is the additive group of homogeneous symmetric polynomials of degree d. For example, when n=3, we have (upon renaming the variables to x, y, z):

\begin{aligned} x+y+z &\in \Lambda_3^{(1)},\\ xy + yz + zx &\in \Lambda_3^{(2)},\\x^2 (y+z) + y^2(z+x) + z^2(x+y) &\in \Lambda_3^{(3)}.\end{aligned}

Let us fix n, the number of variables.


Monomial Basis

Clearly, each \Lambda_n^{(d)} is a free additive abelian group. For example, for \Lambda_3^{(3)}, a basis is given by:

x^3 + y^3+ z^3, \ x^2(y+z) + y^2(x+z) + z^2(x+y), \ xyz.

More generally, recall that a partition \lambda of d is a sequence of non-negative integers:

\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_l), \qquad \lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_l \ge 0, \qquad \sum_i \lambda_i = d.

Two partitions of d are considered the same if we can obtain one from the other by appending or dropping zeros. E.g. (3, 2, 1, 0) and (3, 2, 1) are identical partitions of 6.

Definition. Given a partition \lambda of d, let m_\lambda be the symmetric polynomial obtained by summing all distinct monomials of the form:

x_{i_1}^{\alpha_1} x_{i_2}^{\alpha_2}\ldots x_{i_l}^{\alpha_l}, \qquad {\small\begin{aligned} & 1\le i_1 < \ldots < i_l \le n, \\ & \alpha_1, \alpha_2, \ldots, \alpha_l \text{ is a permutation of } \lambda_1, \lambda_2, \ldots, \lambda_l.\end{aligned}}

For example when n=3, we have m_{(3)} = x^3 + y^3 + z^3, and m_{(2,1)} = x^2(y+z) + y^2(x+z) + z^2(x+y).

The following result is clear.

Theorem. The additive group \Lambda_n^{(d)} is free with basis given by m_\lambda, for all partitions \lambda of d into at most n parts.

Let us consider an example where d=4, n=3. A basis of \Lambda_3^{(4)} is given by:

\begin{aligned} m_{4} &= x^4 + y^4 + z^4,\\ m_{31} &= x^3 (y+z) + y^3 (x+z) + z^3(x+y),\\ m_{22} &= x^2 y^2 + y^2 z^2 + z^2 x^2,\\ m_{211} &= x^2 yz + xy^2 z + xyz^2. \end{aligned}

Note that for brevity of notation, we write m_{211} instead of m_{(2,1,1)}. However, if the subscripts involve variables, we will include the commas for clarity.


Elementary Symmetric Polynomials

Recall that the elementary symmetric polynomials in x, y, z are given by x+y+z, xy+yz+zx and xyz. Generalizing this, we define:

e_0 = 1, \qquad e_k = \sum_{1 \le i_1 < i_2 < \ldots< i_k \le n} x_{i_1} x_{i_2} \ldots x_{i_k}\text{ for } 1 \le k \le n.

Furthermore, given a partition \lambda of d, we define:

e_\lambda := e_{\lambda_1} e_{\lambda_2} \ldots e_{\lambda_l},

assuming \lambda_{l+1} = \lambda_{l+2} = \ldots = 0. Note that since e_0 = 1, we can take as many terms as we want by increasing l without affecting e_\lambda. For example, when n=3 we have:

e_{3221} = xyz(xy + yz + zx)^2 (x+y+z).

Now since e_\lambda \in \Lambda_n^{(d)} (here, d=|\lambda| where |\lambda| denotes the sum of all \lambda_i), it is natural to ask the following question.

Task. Express the symmetric polynomial e_\lambda as a \mathbb{Z}-linear combination of the monomial polynomials m_\mu for various |\mu| = d.

In our above example, expanding e_{3221} with WolframAlpha gives us: m_{431} + 2m_{422} + 5m_{332}. In the remainder of this article, we will describe a more systematic method of computation.
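Before developing the systematic method, one can at least spot-check this expansion numerically (a sketch; both sides are evaluated at the sample values (x, y, z) = (2, 3, 5)):

```python
from itertools import combinations, permutations
from math import prod

xs = (2, 3, 5)  # sample values for (x, y, z)

def e(k):
    return sum(prod(c) for c in combinations(xs, k))

def m(lam):
    """Monomial symmetric polynomial: sum over distinct rearrangements of the exponents."""
    padded = tuple(lam) + (0,) * (len(xs) - len(lam))
    return sum(prod(x**a for x, a in zip(xs, alpha)) for alpha in set(permutations(padded)))

lhs = e(3) * e(2)**2 * e(1)  # e_{3221}
rhs = m((4, 3, 1)) + 2 * m((4, 2, 2)) + 5 * m((3, 3, 2))
print(lhs, rhs, lhs == rhs)  # 288300 288300 True
```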


Expanding the e‘s

Here is our main result.

Theorem. Suppose |\lambda| =d. Consider the expression

e_\lambda = \sum_{\mu} N_{\lambda\mu} m_\mu,

where we sum over all partitions \mu with |\mu| = d. The coefficient N_{\lambda\mu} is given by the number of matrices (a_{ij}) with entries either 0 or 1 such that

\sum_j a_{ij} = \lambda_i \text{ for each } i \text{ (row sums)}, \qquad \sum_i a_{ij} = \mu_j \text{ for each } j \text{ (column sums)}.

Here is a sample computation: take the polynomial e_{3221} above and find the coefficient of the monomial m_{332}. For that, we need to find the number of binary matrices with row sums \lambda = (3,2,2,1) and column sums \mu = (3, 3, 2). As expected, we get 5 solutions:

five_binary_matrices

Proof of Theorem.

Using the above example with n=3, we expand e_\lambda = e_{\lambda_1} e_{\lambda_2} \ldots e_{\lambda_l} and attempt to find the coefficient of x^\mu = x_1^{\mu_1} x_2^{\mu_2} \ldots x_l^{\mu_l}. E.g. suppose \lambda = (3, 2, 2, 1) and \mu = (3, 3, 2). We wish to expand:

e_3 e_2 e_2 e_1 = x_1 x_2 x_3(x_1 x_2 + x_1 x_3 + x_2 x_3)(x_1 x_2 + x_1 x_3 + x_2 x_3) (x_1 + x_2 + x_3).

Here is one way we can multiply terms to obtain x^\mu = x_1^3 x_2^3 x_3^2.

\begin{aligned} e_3 = &\boxed{x_1 x_2 x_3}, \\ e_2 = x_1 x_2 + &\boxed{x_1 x_3} + x_2 x_3, \\ e_2 = &\boxed{x_1 x_2} + x_1 x_3 + x_2 x_3,\\ e_1 = x_1 + &\boxed{x_2} + x_3. \end{aligned}\implies \begin{array}{|c|ccc|} \hline 3 & 1 & 1 & 1 \\ 2 & 1 & 0 & 1 \\ 2 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 \\ \hline & 3 & 3 & 2\\ \hline \end{array}

Thus each binary matrix corresponds to a way of obtaining x^\mu by multiplying terms from e_{\lambda_1}, e_{\lambda_2}, etc. ♦

Warning. In the definition of N_{\lambda\mu}, the variable n is ostensibly missing. However note that if \mu_{n+1} > 0, then the monomial m_{\mu}=0 since it would involve more than n variables. For example, if n=2 and \lambda = (2,1), then the above expansion gives e_{21} = m_{21} + 3m_{111} = m_{21} since m_{111} = 0. One can verify this directly by expanding e_{21} = xy(x+y).

On the other hand, if n=3, we now have e_{21} = (xy + yz + zx)(x+y+z) = m_{21} + 3m_{111} and now m_{111} \ne 0.


Exercise

1. Express (w + x + y + z)^4 as a linear combination of the monomial symmetric polynomials:

m_4, \quad m_{31},\quad m_{22}, \quad m_{211}, \quad m_{1111}.

Check your computations with WolframAlpha.

2. Write a program in Python (or any language) which computes N_{\lambda\mu} for any partitions \lambda, \mu.


Modular Representation Theory (IV)

Continuing our discussion of modular representation theory, we will now discuss block theory. Previously, we saw that in any ring R, there is at most one way to write 1 = e_1 + \ldots + e_r where the e_i \in Z(R) form a set of orthogonal, centrally primitive idempotents. If such an expression exists, the e_i are called block idempotents of R. For example, block idempotents exist when R is artinian.

We need the following refinement:

Lemma. Let 1 = e_1 + \ldots + e_r be block idempotents of R. Suppose 1 = f_1 + \ldots + f_s where the f_j \in Z(R) are orthogonal central idempotents. Then there exists a unique map \phi: \{1, \ldots, r\} \to \{1, \ldots, s\} such that:

f_j = \sum_{i \in \phi^{-1}(j)} e_i.

Note: if \phi^{-1}(j) = \emptyset, the sum is zero.

Thus after a suitable renumbering of terms, we have:

1 = \overbrace{e_1 + \ldots + e_{i_1}}^{f_1} +\overbrace{e_{i_1+1} + \ldots + e_{i_2}}^{f_2}+\ldots + \overbrace{e_{i_{s-1}+1} + \ldots + e_{i_s}}^{f_s}.

Proof

For each e_i we have e_i = e_i f_1 + \ldots + e_i f_s, where \{e_i f_1, \ldots, e_i f_s\} are orthogonal central idempotents. Since e_i is centrally primitive, we must have e_i f_j = e_i for some unique j and e_i f_k = 0 for all k \ne j. Setting \phi(i) := j then gives e_i f_{\phi(i)} = e_i and e_i f_k = 0 for all k\ne \phi(i). And so

f_j = e_1 f_j + \ldots + e_r f_j = \sum_{i\in \phi^{-1}(j)} e_i. ♦

Decomposition of R-Modules

Let M be an R-module and suppose 1 = e_1 + \ldots +e_r where the e_i are block idempotents of R. Then M = \oplus_{i=1}^r e_i M.

  • Indeed, M = \sum_i e_i M since each m = \sum_i e_i m \in \sum_i e_i M.
  • On the other hand, if x \in e_1 M \cap (e_2 M +\ldots + e_r M) then we have x = e_1 m_1 = e_2 m_2 + \ldots + e_r m_r and thus e_1 m_1 = e_1^2 m_1 = e_1 e_2 m_2 + \ldots + e_1 e_r m_r = 0.

Furthermore, since e_i commutes with every r \in R, e_i M \subseteq M is in fact an R-submodule. The central idempotent e_i acts as the identity on e_i M and zero on e_j M for j \ne i. One can thus imagine:

R \cong R_1 \times R_2 \times \ldots \times R_r, \qquad M \cong M_1 \times M_2 \times \ldots \times M_r

where each M_i is an R_i-module.

Block Idempotents of Semisimple R

Recall that a semisimple ring R is isomorphic to \prod_{i=1}^m M_{n_i}(D_i) for some division rings D_i, where M_n(D) is the n × n matrix ring with entries in D. Since each matrix ring is a simple ring, we immediately obtain the central idempotents: for i=1,\ldots,m, e_i corresponds to the element whose component in M_{n_i}(D_i) is the identity matrix, and whose component in M_{n_j}(D_j), j\ne i, is the zero matrix.

As a module over itself, R is a direct sum of the spaces of column vectors, so R = \oplus_{i=1}^m S_i^{n_i} where S_1, \ldots, S_m run through a complete collection of simple R-modules, and the component T_i := S_i^{n_i} gives the maximal decomposition R = \oplus_i T_i as a direct sum of ideals.

In particular, this holds for the group ring K[G]. We have:

K[G] = \oplus V^{\dim_K V}, where the direct sum is over all simple V.

Lemma. The block idempotent for V^{\dim_K V} is given by the following formula:

e_V = \frac{\dim V}{|G|} \sum_{g\in G} \chi_V(g^{-1})g,

where \chi_V(g) := \text{tr}(g: V\to V) is the character of V.

Proof

Fix V; it suffices to show that e_V induces the identity map on V^{\dim V} and zero on all other components. Now the coefficients of e_V are constant over each conjugacy class, so e_V\in Z(K[G]) and e_V is K[G]-linear. If W is simple, e_V induces a scalar map on it, say \lambda_W\cdot \text{id}_W. To compute \lambda_W we take the trace:

\lambda_W \cdot \dim_K W = \text{tr}(e_V : W\to W) = \frac{\dim V}{|G|} \sum_{g\in G} \chi_V(g^{-1}) \text{tr}(g: W\to W).

But \text{tr}(g:W\to W) = \chi_W(g) and so the above sum is \dim V\cdot \left< \chi_V, \chi_W\right> = \dim V\cdot \delta_{V,W}. Thus \lambda_W = \delta_{V,W} as desired. ♦


Block Idempotents of R[G]

Since k[G] is artinian, block idempotents are guaranteed to exist. Furthermore these can be lifted to block idempotents of R[G], which is a nice result since R[G] is not artinian.

Lemma. Suppose e_1 + \ldots + e_n = 1 are orthogonal central idempotents of k[G]. Then we can find orthogonal central idempotents \hat{e_1} + \ldots + \hat{e_n} = 1 of R[G] such that \hat{e_i} \equiv e_i \pmod \pi.

Proof

The proof is conceptually similar to an earlier lemma. The main step is to show:

Claim: if S is a commutative ring with ideal I such that I^2 = 0, then any orthogonal idempotents e_i\in S/I summing to 1 can be lifted to orthogonal idempotents f_i\in S summing to 1.

[ If we can show this, then idempotents e_i \in Z((R/\pi)[G]) can be lifted to Z((R/\pi^2)[G]), in turn to Z((R/\pi^4)[G]), etc. Since R is complete, this gives idempotents in Z(R[G]). ]

Proof of Claim.

Pick any x_i \in S such that x_i \pmod I = e_i. Thus:

\sum x_i \equiv 1 \pmod I, \quad x_i x_j \equiv 0 \pmod I \text{ for } i\ne j, \quad x_i^2 \equiv x_i \pmod I.

As in the earlier proof, let y_i = 3x_i^2 - 2x_i^3; this gives y_i^2 = y_i. Also y_i y_j = 0 for all i \ne j, since y_i y_j is divisible by (x_i x_j)^2 \in I^2 = 0 (using that S is commutative). Finally, we claim that \sum y_i = 1. Indeed, from the factorisation 3T^2 - 2T^3 - 1 = -(T-1)^2(2T+1) we obtain:

\sum_i y_i - 1 = 3\sum_i y_i^2 - 2\sum_i y_i^3 - 1 = -(\sum_i y_i - 1)^2 (2\sum_i y_i + 1)

where the first equality follows from y_i^3=y_i^2 = y_i and the second follows from y_i y_j = 0 for i \ne j. Note that y_i \equiv x_i \pmod I, so \sum_i y_i - 1 \in I and hence (\sum_i y_i - 1)^2 \in I^2 = 0. The displayed equation then gives \sum_i y_i - 1 = 0. ♦
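The cubic y = 3x^2 - 2x^3 can be watched in action in a toy example (an illustration only, not the group-ring setting): take S = \mathbb{Z}/36 and I = 6\mathbb{Z}/36, so that I^2 = 0 and S/I = \mathbb{Z}/6, where 3 and 4 are orthogonal idempotents summing to 1.

```python
M = 36
lift = lambda x: (3 * x**2 - 2 * x**3) % M

# 3 and 4 are orthogonal idempotents summing to 1 in Z/6; lift them to Z/36.
y1, y2 = lift(3), lift(4)
print(y1, y2)                                                          # 9 28
print((y1*y1 - y1) % M, (y2*y2 - y2) % M, (y1*y2) % M, (y1 + y2) % M)  # 0 0 0 1
```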

Conversely, if \hat{e_i}\in R[G] are orthogonal central idempotents summing to 1, then so are their images e_i \in k[G]. Finally, we have:

Lemma. If \hat{e}, \hat{e}'\in R[G] are central idempotents with the same image in k[G], then they are equal.

Proof.

Let \hat f:=\hat{e} - \hat{e}'. Since \hat{e}, \hat{e}' are commuting idempotents, \hat f^3 = \hat f; moreover \hat f \in \pi R[G]. But \hat f \in \pi^m R[G] \implies \hat f = \hat{f}^3\in \pi^{3m} R[G], so we must have \hat f = 0 and so \hat{e} = \hat{e}'. ♦

Thus, we have shown:

Summary.

There is a 1-1 correspondence between:

  • orthogonal central idempotents of k[G] summing to 1, and
  • orthogonal central idempotents of R[G] summing to 1.

In particular, block idempotents for k[G] lift to those for R[G]:

1 = e_1 + e_2 + \ldots + e_r,\ (e_i \in k[G]) \ \mapsto 1 = \hat{e_1} + \hat{e_2} + \ldots + \hat{e_r},\ (\hat{e_i} \in R[G]).

Taking each \hat{e_i} \in K[G], we can write it as a sum of the block idempotents of K[G]. Thus, we can partition the set of simple K[G]-modules as a disjoint union \cup_i B_i, one for each \hat{e_i}, such that:

\hat{e_i} = \sum_{V\in B_i} e_V.

For convenience, we also denote e_V by e_\chi where χ is the character of V. This gives the formula e_\chi = \frac{\chi(1)}{|G|} \sum_{g\in G} \chi(g^{-1})g.

Example: S_3.

Let’s compute the central idempotents for K[S_3] using the above formula. Let a = (1,2) + (2,3) + (3,1) and b = (1,2,3) + (1,3,2). We recover the example at the end of the previous article:

chartable_idempotent_s3

Now let’s consider the case p=2. We have 1 = (e_1 + e_2) + e_3 = \frac 1 3(1 + b) + \frac 1 3 (2-b) as block idempotents of R[G] (and also of k[G], after reduction mod 2), so the blocks are \{e_1, e_2\}, \{e_3\}. For p=3, e_1, e_2, e_3 all belong to the same block.
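These idempotent identities can be verified mechanically in \mathbb{Q}[S_3] (a sketch; permutations of \{0,1,2\} are stored as tuples, algebra elements as coefficient dictionaries, and e12, e3 are hypothetical names for e_1 + e_2 and e_3):

```python
from fractions import Fraction

def compose(p, q):  # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def mul(a, b):      # product in the group algebra
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, Fraction(0)) + cp * cq
    return {g: c for g, c in out.items() if c}

one = (0, 1, 2)
cyc = [(1, 2, 0), (2, 0, 1)]  # the two 3-cycles; b is their sum
e12 = {one: Fraction(1, 3), **{g: Fraction(1, 3) for g in cyc}}   # (1/3)(1 + b)
e3 = {one: Fraction(2, 3), **{g: Fraction(-1, 3) for g in cyc}}   # (1/3)(2 - b)

print(mul(e12, e12) == e12, mul(e3, e3) == e3, mul(e12, e3) == {})  # True True True
```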


If M is an indecomposable k[G]-module, then M = \oplus_i e_i M and thus there is exactly one i for which M = e_i M, and e_j M = 0 for all j \ne i. As a result, the basis elements [P] \in P_k(G) and [M] \in R_k(G) can be classified into blocks, where P (resp. M) belongs to block e_i if and only if e_i P = P (resp. e_i M = M). Note that if P belongs to block e_i, then the idempotent e_i acts as the identity on P; the same holds for M.

Similarly, for a simple K[G]-module V, there is a unique i for which \hat{e_i}V = V. In summary, we can think of a block as a collection of:

  • indecomposable finitely-generated projective k[G]-modules P;
  • simple k[G]-modules M;
  • simple K[G]-modules V.

Lemma. Suppose basis elements [P]\in P_k(G), [V]\in R_K(G) belong to distinct blocks e_i and e_j, where i \ne j. Then the matrix entry of e: P_k(G) \to R_K(G) corresponding to [P], [V] is zero.

Proof

Indeed, we have e_i P = P, e_j P = 0 and \hat{e_i} V = 0, \hat{e_j}V = V. If [\hat P] \in P_R(G) is the lift of [P], we have \hat{e_i} \hat P \ne 0 and thus \hat{e_i} \hat P = \hat P since \hat P is an indecomposable R[G]-module. Hence \hat{e_i}(K \otimes_R \hat P) = K\otimes_R \hat P and we must have \hat{e_i} W = W for any irreducible component W of K\otimes_R \hat P. This shows that V cannot be a component of K\otimes_R \hat P. ♦

Corollary. Let [P] \in P_k(G), [M] \in R_k(G), [V]\in R_K(G) be basis elements.

  • If [M] and [V] belong to different blocks, the matrix entry of d:R_K(G) \to R_k(G) is zero.
  • If [P] and [V] belong to different blocks, the matrix entry of c:P_k(G) \to R_k(G) is zero.

Proof

The first statement follows from the fact that the matrix for d is the transpose of that for e; the second statement follows from c = d \circ e. ♦

Thus, the matrices for c:P_k(G) \to R_k(G), d:R_K(G) \to R_k(G), e:P_k(G) \to R_K(G) can be broken up as block matrices, one for each block idempotent of k[G].

Example: S_4.

The character table of S_4 gives us: letting a = (sum of 2-cycles), b = (sum of 3-cycles), c = (sum of 4-cycles), d = (sum of (2,2)-cycles), the central idempotents of K[S_4] are:

\begin{aligned} e_{\text{triv}} = \frac 1 {24} (1 + a + b + c + d),&\quad e_{\text{alt}} = \frac 1 {24} (1 - a + b - c + d)\\ e_2 = \frac 1 {12}(2 - b + 2d),&\quad e_1 = \frac 1 8 (3 + a - c - d)\\ e_{1,\text{alt}} = \frac 1 8 (3 - a + c -d).&\end{aligned}

For p = 2, all five simple characters belong to the same block. For p = 3, we have:

1 = \overbrace{e_{\text{triv}} + e_{\text{alt}} + e_2}^{\text{Block 1}} + \overbrace{e_1}^{\text{Block 2}} + \overbrace{e_{1, \text{alt}}}^{\text{Block 3}}.

chartable_block_s4


Finally, we can check when two characters lie within the same block. First, we need the following lemma:

Lemma. Let R be a commutative k-algebra of finite dimension over k and 1 = e_1 + \ldots + e_r be its block idempotents. Then any k-algebra homomorphism \phi : R\to k is uniquely determined by the image of the block idempotents \phi(e_1), \ldots, \phi(e_r).

Proof

Corresponding to the block idempotents, write R = R_1\times \ldots \times R_r as a product of commutative k-algebras. Since R_i is artinian, R_i/J(R_i) is semisimple and hence a product of matrix algebras. Since R_i has no idempotents except 0 and 1, R_i/J(R_i) itself is a matrix algebra, i.e. M_n(D) for some division ring D over k. But R_i is commutative, so n=1 and D is a field extension k' of k.

On the other hand, let \phi_i := \phi|_{R_i} : R_i \to k. Since \phi(e_1 + \ldots + e_r) = 1 and the \phi(e_i) are orthogonal idempotents, exactly one \phi_i is a k-algebra homomorphism while the remaining \phi_j are zero maps for j \ne i. Now each \phi_j : R_j \to k must kill the nilpotent ideal J(R_j), so it factors through \phi_j' : k' \to k, which is either the zero map or the identity (a nonzero k-algebra homomorphism k' \to k forces k' = k); which case occurs is determined by whether \phi(e_j) is 0 or 1. ♦

Theorem. Let V, W be simple K[G]-modules; the following are equivalent.

  • V and W are in the same block.
  • For any conjugacy class C ⊆ G and g ∈ C, we have:

\frac{|C| \chi_V(g)}{\dim_K V} \equiv \frac{|C| \chi_W(g)}{\dim_K W} \pmod \pi.

Note

In the course of the proof, we will see that \frac{|C| \chi_V(g)}{\dim_K V}\in R for any simple V, so the congruence is well-defined.

Proof

Step 1. For each simple V, we will define a ring homomorphism \lambda_V : Z(K[G]) \to K.

Given any conjugacy class C \subseteq G, define \alpha_C := \sum_{g\in C} g, regarded as a K-linear map V \to V. Since \alpha_C commutes with all g \in G, it is a K[G]-linear map V \to V. But V is simple, hence such a map is a scalar, say \lambda_V(\alpha_C) \cdot \mathrm{id}_V. Since the \alpha_C form a K-basis of Z(K[G]), this gives a ring homomorphism \lambda_V : Z(K[G]) \to K.

Step 2. Show that \lambda_V(\alpha_C) is the LHS of the congruence and that this lies in R.

Taking the trace of \alpha_C : V\to V, we get:

\lambda_V(\alpha_C) \dim V = \sum_{g\in C} \text{tr}(g: V\to V) = |C| \chi_V(g).

So \lambda_V(\alpha_C) = \frac{|C| \chi_V(g_C)}{\dim V} for any representative g_C \in C. To prove that this lies in R, recall that we can pick an R-lattice M \subseteq V which is an R[G]-module; thus \alpha_C(M) \subseteq M. Picking an R-basis for M, the matrix of \alpha_C has entries in R, so \det(\alpha_C) = \lambda_V(\alpha_C)^{\dim V} \in R and hence \lambda_V(\alpha_C) \in R, since R is a valuation ring.

Step 3. Compute \lambda_V(e_W) where e_W is the K[G] block idempotent for W.

Let us rewrite

e_W = \frac{\dim W}{|G|} \sum_{g\in G} \chi_W(g^{-1})g = \frac{\dim W}{|G|} \sum_{\text{conj. cl. }C} \chi_W(g_C^{-1}) \alpha_C

and so \lambda_V(e_W) = \frac{\dim W}{\dim V\cdot |G|} \sum_C |C| \chi_V(g_C)\chi_W(g_C^{-1}) = \frac{\dim W}{\dim V} \left< \chi_V, \chi_W\right> = \delta_{V,W}.

Step 4. Complete the proof.

Since \hat{e_i}= \sum_{V\in B_i} e_V, step 3 tells us V and W belong to the same block B_i if and only if \lambda_V(\hat{e_i}) = \lambda_W(\hat{e_i}) for all i. But this value is either 0 or 1, so it holds if and only if

\lambda_V(\hat{e_i}) \equiv \lambda_W(\hat{e_i}) \pmod \pi for all i.

On the other hand, \lambda_V(Z(R[G])) \subseteq R so we also obtain a ring homomorphism \lambda_V : Z(R[G]) \to R. Reduction mod π then gives us a k-algebra homomorphism \lambda_V' : Z(k[G]) \to k. By the above, V and W belong to the same block if and only if \lambda_V'(e_i) = \lambda_W'(e_i) \in k for all i. Since Z(k[G]) is a commutative k-algebra of finite dimension over k, the above lemma says this holds if and only if \lambda_V' = \lambda_W' : Z(k[G]) \to k which is equivalent to \lambda_V(\alpha_C) \equiv \lambda_W(\alpha_C) \pmod \pi for all C. ♦
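To see the congruence criterion in action on a small example not treated above, take G = S_3. The central character values |C|\chi_V(g)/\dim V happen to be ordinary integers here, so the congruence mod π reduces to a congruence mod p, and a few lines of Python recover the block partition:

```python
from fractions import Fraction

# Character table of S3: classes e, transpositions (size 3), 3-cycles (size 2)
class_sizes = [1, 3, 2]
chars = {
    "triv": [1, 1, 1],
    "sign": [1, -1, 1],
    "std":  [2, 0, -1],
}

def central_character(chi):
    # omega_chi(C) = |C| * chi(g_C) / chi(e); these are algebraic integers,
    # and for S3 they are plain integers.
    dim = chi[0]
    vals = [Fraction(n * c, dim) for n, c in zip(class_sizes, chi)]
    assert all(v.denominator == 1 for v in vals)
    return [int(v) for v in vals]

def blocks(p):
    # group the characters whose central characters agree mod p
    groups = {}
    for name, chi in chars.items():
        key = tuple(v % p for v in central_character(chi))
        groups.setdefault(key, []).append(name)
    return sorted(sorted(g) for g in groups.values())

print(blocks(2))  # [['sign', 'triv'], ['std']] : two 2-blocks
print(blocks(3))  # [['sign', 'std', 'triv']]  : a single 3-block
```

Note that the 2-dimensional representation forms a 2-block on its own (a block of defect zero), while all three characters fall into one 3-block.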

Example: S5.

Modulo 2, the blocks of the character table are labeled by the circles on the right:

block_s5_mod_2

The corresponding matrices are:

block_s5_mod_2_matrices

Modulo 3, the table becomes:

block_s5_mod_3

The corresponding matrices are:

block_s5_mod_3_matrices


Idempotents and Decomposition

Let R be a general ring, not necessarily commutative. An element x \in R is said to be idempotent if x^2 = x.

Note

An endomorphism f of an R-module M (i.e. f\in \text{End}_R M) is an idempotent if and only if f is a projection, i.e. M = \ker(f) \oplus \text{im}(f) and f : M \to M projects onto \text{im}(f). Indeed ⇐ is obvious, and conversely if f is idempotent, we have:

  • Every m \in M is just f(m) + (m - f(m)). The first term is in \text{im}(f); the second lies in \ker(f) since f(m - f(m)) = f(m) - f^2(m) = f(m) - f(m) = 0. So M = \ker(f) + \text{im}(f).
  • Any element of \ker(f) \cap \text{im}(f) can be written as f(m) such that f(f(m)) = 0. But this means f(m) = f^2(m) = 0, so \ker(f) \cap \text{im}(f) = 0.
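As a tiny illustration of the note (with a made-up idempotent), take f acting on \mathbf{Q}^2 with matrix \begin{pmatrix} 1 & 1 \\ 0 & 0\end{pmatrix}; then m = f(m) + (m - f(m)) realizes M = \text{im}(f) \oplus \ker(f):

```python
# f is idempotent (f∘f = f), so it is the projection onto im(f) along ker(f)
def apply_f(v):
    x, y = v
    return (x + y, 0)                  # matrix [[1, 1], [0, 0]]

m = (3, 5)
fm = apply_f(m)                        # component in im(f)
km = (m[0] - fm[0], m[1] - fm[1])      # component in ker(f)

assert apply_f(fm) == fm               # f fixes im(f), i.e. f^2 = f there
assert apply_f(km) == (0, 0)           # m - f(m) lies in ker(f)
assert (fm[0] + km[0], fm[1] + km[1]) == m   # m = f(m) + (m - f(m))
```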

Throughout this article, we shall focus on idempotents which commute, i.e. ef = fe. A set of idempotents \{e_i\} is said to be orthogonal if e_i e_j = 0 for all i \ne j. The following are easy to prove.

  1. The sum of two orthogonal idempotents is also an idempotent.
  2. If e is any idempotent, then e and 1-e are orthogonal idempotents.

The key result is the following.

Theorem. Let R be any ring. There is a 1-1 correspondence between the following:

  • a decomposition R = I_1 \oplus \ldots \oplus I_n as a direct sum of left ideals, and
  • orthogonal idempotents e_1, \ldots, e_n such that e_1 + \ldots + e_n = 1.

Proof

The correspondence is given as follows: for any R = \oplus I_i, write 1 = \sum_i x_i with x_i \in I_i. Then each x_i is idempotent since x_i = x_i\cdot 1 = \sum_{j=1}^n x_i x_j and each x_i x_j \in I_j since I_j is a left ideal. Since R is the direct sum of the I_j, we have x_i x_j = 0 for all j \ne i and x_i = x_i^2 for all i. Hence the \{x_i\} are orthogonal idempotents.

Conversely given \{e_i\}, let us define I_i := Re_i, which is a left ideal. Note that R = \sum_i I_i since 1 lies in the RHS. On the other hand, an element of Re_1 \cap (Re_2 + \ldots + Re_n) can be written as r_1 e_1 = r_2 e_2 + \ldots + r_n e_n. Then r_1 e_1 = r_1 e_1^2 = r_2 e_2 e_1 + \ldots + r_n e_n e_1 = 0 since the e_i are orthogonal. Similarly, Re_i \cap (Re_1 + \ldots + \widehat{Re_i} + \ldots + Re_n) = 0 for all i and thus R = I_1 \oplus \ldots \oplus I_n.

It remains to show that the two constructions are mutually inverse.

  • Start with R = \oplus I_i and write 1 = \sum_i e_i with e_i \in I_i. We need to show I_i = Re_i.
  • Since e_i \in I_i we have I_i \supseteq Re_i.
  • Conversely, if x\in I_i write x = x\cdot 1 = \sum_j xe_j where xe_j \in Re_j \subseteq I_j from what we just proved. Since x\in I_i we see that xe_j = 0 for all j \ne i and so x = xe_i \in Re_i.

Finally, start with orthogonal e_i summing to 1 and let I_i := Re_i. Clearly 1=\sum_i e_i with e_i\in I_i, and this is the unique such representation. ♦

Note that under the above correspondence,

  • I_i = 0 if and only if e_i = 0;
  • any idempotent e \in R gives orthogonal idempotents \{e, 1-e\}, and so R = Re \oplus R(1-e).


Indecomposable Left Ideals

Now suppose R is an artinian ring (and hence noetherian by the Hopkins–Levitzki theorem). The Krull–Schmidt theorem says that R is a direct sum of indecomposable projective modules I_i. Such modules correspond to primitive idempotents.

Definition. A non-zero idempotent e is said to be primitive if it cannot be written as a sum of non-zero orthogonal idempotents e = f_1 + f_2.

Proposition. If e is an idempotent, then Re is indecomposable if and only if e is primitive.

Proof

Indeed, if e = f_1 + f_2 then Re = Rf_1 \oplus Rf_2 is the direct sum of two non-zero left modules.

  • First, Rf_1 \cap Rf_2 = 0: indeed, if xf_1 = yf_2 then xf_1 = xf_1^2 = yf_2 f_1 = 0, so we do have a direct sum on the RHS.
  • Clearly R(f_1 + f_2) \subseteq Rf_1 \oplus Rf_2.
  • Finally for xf_1 + yf_2 in the RHS, we have xf_1 + yf_2 = (xf_1 + yf_2)(f_1 + f_2).

Conversely, if Re = I \oplus J, express e = x + y and we have a sum of two orthogonal idempotents (proof left to the reader). ♦

Warning. In writing R as a direct sum of indecomposable projective modules, the terms are unique up to isomorphism and permutation, but this does not mean the corresponding idempotents are unique. Specifically, we can have R \cong \oplus I_i' where I_i' \cong I_i as R-modules but they are distinct left ideals of R.

Examples

Suppose R is the ring of 2×2 upper-triangular matrices with real entries. We have the following decomposition:

R = \left\{\begin{pmatrix} * & * \\ 0 & *\end{pmatrix}\right\} = \left\{ \begin{pmatrix} * & 0 \\ 0 & 0 \end{pmatrix}\right\} \oplus \left\{ \begin{pmatrix} 0 & * \\ 0 & * \end{pmatrix}\right\} = \left \{\begin{pmatrix} a & a \\ 0 & 0 \end{pmatrix} \right\} \oplus \left\{ \begin{pmatrix} 0 & * \\ 0 & * \end{pmatrix} \right\}.

Correspondingly, we have the (orthogonal) idempotents:

I = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -1 \\ 0 & 1\end{pmatrix}.
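These claims are easy to verify mechanically; the following sketch checks idempotency, orthogonality and the sum for both decompositions:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
Z = [[0, 0], [0, 0]]

# the two decompositions of 1 in the ring of upper-triangular matrices
pairs = [
    ([[1, 0], [0, 0]], [[0, 0], [0, 1]]),
    ([[1, 1], [0, 0]], [[0, -1], [0, 1]]),
]
for e1, e2 in pairs:
    assert mat_mul(e1, e1) == e1 and mat_mul(e2, e2) == e2  # idempotent
    assert mat_mul(e1, e2) == Z and mat_mul(e2, e1) == Z    # orthogonal
    assert mat_add(e1, e2) == I                             # sum to 1
print("both decompositions check out")
```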

For an example in group algebras, let K be a field whose characteristic is not 2 or 3. Then K[S_3] is semisimple and we have the following orthogonal idempotents:

\begin{aligned} 1 = &\small\frac 1 6 (1 + (1,2) + (1,3) + (2,3) + (1,2,3) + (1,3,2)) + \\ &\frac 1 6 (1 - (1,2) - (1,3) - (2,3) + (1,2,3) + (1,3,2)) + \\ &\frac 1 3 (1 + (1,2) - (1,3) - (1,3,2)) + \\ &\frac 1 3 (1 - (1,2) + (1,3) - (1,2,3)). \end{aligned}
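One can check these four idempotents by multiplying out in K[S_3] directly; here is a short verification sketch (permutations encoded 0-indexed, composed right-to-left, with exact rational arithmetic):

```python
from fractions import Fraction
from itertools import permutations

# permutations of {0,1,2} as tuples; compose(p, q) applies q first, then p
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def cyc(*orbit):
    # build a permutation tuple from a cycle on {1,2,3} (1-indexed)
    p = list(range(3))
    for a, b in zip(orbit, orbit[1:] + orbit[:1]):
        p[a - 1] = b - 1
    return tuple(p)

e = (0, 1, 2)

def alg(terms):
    # group algebra element: dict perm -> coefficient, zero terms dropped
    d = {}
    for coeff, g in terms:
        d[g] = d.get(g, Fraction(0)) + Fraction(coeff)
    return {g: c for g, c in d.items() if c}

def mul(u, v):
    w = {}
    for g, a in u.items():
        for h, b in v.items():
            gh = compose(g, h)
            w[gh] = w.get(gh, Fraction(0)) + a * b
    return {g: c for g, c in w.items() if c}

sixth, third = Fraction(1, 6), Fraction(1, 3)
idems = [
    alg([(sixth, g) for g in permutations(range(3))]),               # trivial
    alg([(sixth, e), (-sixth, cyc(1, 2)), (-sixth, cyc(1, 3)),
         (-sixth, cyc(2, 3)), (sixth, cyc(1, 2, 3)), (sixth, cyc(1, 3, 2))]),
    alg([(third, e), (third, cyc(1, 2)), (-third, cyc(1, 3)),
         (-third, cyc(1, 3, 2))]),
    alg([(third, e), (-third, cyc(1, 2)), (third, cyc(1, 3)),
         (-third, cyc(1, 2, 3))]),
]

for i, u in enumerate(idems):
    assert mul(u, u) == u                              # idempotent
    for v in idems[i + 1:]:
        assert mul(u, v) == {} and mul(v, u) == {}     # orthogonal

total = alg([(c, g) for u in idems for g, c in u.items()])
assert total == {e: Fraction(1)}                       # they sum to 1
print("orthogonal idempotents verified")
```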


Central Idempotents

Recall that the centre of a ring R, denoted Z(R), is the set of all x \in R such that xy = yx for all y \in R. This is a commutative subring of R. The centre of a group algebra is easy to describe.

Lemma. If R is commutative, then Z(R[G]) is the free R-module with basis:

e_C := \sum_{g\in C} g \in R[G], where C ⊆ G is a conjugacy class.

For example if G = S_3, then Z(R[G]) is the free R-module with basis e, (1,2)+(1,3)+(2,3) and (1,2,3)+(1,3,2).

Proof

Let \alpha = \sum_{x\in G} c_x x where c_x \in R; this lies in the centre iff g\alpha = \alpha g for all g \in G. Multiplying gives

g\alpha = \sum_{x\in G} c_x gx, \quad \alpha g = \sum_{x\in G} c_x xg.

It follows that α lies in Z(R[G]) iff c_{g^{-1}x} = c_{xg^{-1}} for all g, x \in G, or equivalently, c_{g^{-1}xg} = c_x for all g, x. ♦

In particular, this means if R is a complete discrete valuation ring with uniformizer π, residue field k and field of fractions K, then Z(k[G]) = Z(R[G])/πZ(R[G]) and Z(K[G]) = K\otimes_R Z(R[G]).

Next, we look at idempotents.

Definition. An idempotent e of ring R is said to be central if it lies in Z(R).

Correspondingly we have the following

Theorem. There is a bijection between:

  • an isomorphism R \cong R_1 \times \ldots \times R_n as a product of rings;
  • a decomposition R = I_1 \oplus \ldots \oplus I_n as a direct sum of (two-sided) ideals;
  • an expression 1 = e_1 + \ldots + e_n as a sum of orthogonal central idempotents.

Proof

First we prove the correspondence between the second and third sets.

By the earlier correspondence R = \oplus I_i gives us 1 = \sum_i e_i where e_i \in I_i is a collection of n orthogonal idempotents. Let us show that e_i commutes with all x in R: indeed, xe_i, e_i x\in I_i since I_i is a two-sided ideal. Now x = x\cdot 1 = \sum_i xe_i and x = 1\cdot x = \sum_i e_i x and since R = \oplus I_i is a direct sum, matching components gives us xe_i = e_i x.

Conversely, suppose 1 = \sum_i e_i where \{e_i\} are orthogonal central idempotents. The prior correspondence gives I_i := Re_i, which is a two-sided ideal since e_i commutes with everything.

The correspondence between the first and second collections is left as an exercise. ♦

As before, let us consider the case where the decomposition is maximal.

Definition. Let e be a non-zero central idempotent (in Z(R)). We say e is centrally primitive if we cannot write e as a sum of two non-zero orthogonal central idempotents.

Note

If R is artinian, then we can write R = \oplus I_i as a finite direct sum of ideals, where each I_i cannot be decomposed further as a direct sum of two non-zero ideals. This corresponds to writing 1 = \sum_i e_i as a sum of orthogonal central idempotents which are centrally primitive.

Warning. A central idempotent e can be centrally primitive without being primitive, i.e. e can be written as a sum of two non-zero orthogonal idempotents, but neither of these is central. We will see an explicit example later.

Unlike the case of general idempotents, we have:

Proposition. For any ring R, if

1 = e_1 + \ldots + e_r = f_1 + \ldots + f_s

where each of \{e_1, \ldots, e_r\} and \{f_1, \ldots, f_s\} is a set of centrally primitive central idempotents which are orthogonal, then r=s and there is a permutation σ of {1,…,r} such that e_i = f_{\sigma(i)} for all i.

Proof

First note that e_1, \ldots, e_r are distinct: indeed if e is orthogonal to itself, then 0 = e\cdot e = e. Same goes for f_1, \ldots, f_s.

Next, consider f_j = 1\cdot f_j = \sum_i e_i f_j. Each e_i f_j is a central idempotent; moreover, for fixed i the terms e_i f_1, e_i f_2, \ldots, e_i f_s are orthogonal:

(e_i f_j) \cdot (e_i f_k) = e_i^2 f_j f_k = e_i f_j f_k = \begin{cases} e_i f_j, &\text{ if } j=k,\\ 0, &\text{ otherwise,}\end{cases}

and the analogous computation shows e_1 f_j, \ldots, e_r f_j are orthogonal as well. Since f_j is centrally primitive, we must have f_j = e_i f_j for some unique i, all remaining terms being zero. Likewise, for this i, we have e_i = e_i f_k for some unique k. So f_j = e_i f_j = e_i f_k f_j, and since f_j \ne 0 we must have k=j, whence e_i = e_i f_j = f_j. Since e_1, \ldots, e_r are distinct, as are f_1, \dots, f_s, the result follows. ♦

Example

Let us find all central idempotents of \mathbf{Q}[S_3]. Note that its centre is spanned by 1, a := (1,2)+(1,3)+(2,3) and b := (1,2,3)+(1,3,2). These satisfy

a^2 = 3 + 3b, \quad b^2 = b + 2, \quad ab = ba = 2a.

Now we can write:

1 = \frac 1 3 (2-b) + \frac 1 6 (1-a+b) + \frac 1 6 (1+a+b)

which is a sum of orthogonal central idempotents, giving an isomorphism Z(\mathbf{Q}[S_3]) \cong \mathbf{Q} \times \mathbf{Q} \times \mathbf{Q} of rings. Since this decomposition is clearly maximal, the above three idempotents are all centrally primitive. Note that the first term is centrally primitive but not primitive, since

\frac 1 3(2-b) = \frac 1 3 (1 + (1,2) - (1,3) - (1,3,2)) + \frac 1 3 (1 - (1,2) + (1,3) - (1,2,3)).
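Using the relations a^2 = 3 + 3b, b^2 = b + 2, ab = 2a as structure constants, a few lines of Python confirm that the three terms above are orthogonal idempotents summing to 1:

```python
from fractions import Fraction as F

# an element x·1 + y·a + z·b of Z(Q[S3]) is stored as a triple (x, y, z);
# multiplication uses a^2 = 3 + 3b, b^2 = 2 + b, ab = ba = 2a
def mul(u, v):
    (x1, y1, z1), (x2, y2, z2) = u, v
    return (x1*x2 + 3*y1*y2 + 2*z1*z2,
            x1*y2 + y1*x2 + 2*(y1*z2 + z1*y2),
            x1*z2 + z1*x2 + 3*y1*y2 + z1*z2)

one = (F(1), F(0), F(0))
a, b = (F(0), F(1), F(0)), (F(0), F(0), F(1))
assert mul(a, a) == (F(3), F(0), F(3))     # a^2 = 3 + 3b
assert mul(b, b) == (F(2), F(0), F(1))     # b^2 = 2 + b
assert mul(a, b) == (F(0), F(2), F(0))     # ab = 2a

p1 = (F(2, 3), F(0),      F(-1, 3))        # (1/3)(2 - b)
p2 = (F(1, 6), F(-1, 6),  F(1, 6))         # (1/6)(1 - a + b)
p3 = (F(1, 6), F(1, 6),   F(1, 6))         # (1/6)(1 + a + b)
idems = [p1, p2, p3]

zero = (F(0), F(0), F(0))
for i, u in enumerate(idems):
    assert mul(u, u) == u                  # idempotent
    for v in idems[i + 1:]:
        assert mul(u, v) == zero           # orthogonal
assert tuple(sum(t) for t in zip(*idems)) == one   # they sum to 1
print("centrally primitive idempotents verified")
```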


Modular Representation Theory (III)

Let’s work out some explicit examples of modular characters. First, we have a summary of the main results.

  • Let \varphi_1', \varphi_2', \ldots, \varphi_r' be the modular characters of the simple k[G]-modules; they form a basis of R_k(G).
  • Let \varphi_1'', \varphi_2'', \ldots, \varphi_r'' be those of the projective indecomposable k[G]-modules; they form a basis of P_k(G).
  • We have r = \dim R_k(G) = \dim P_k(G), the number of p-regular conjugacy classes of G.
  • The \varphi_i' and \varphi_i'' form a dual basis under the inner product \left<\varphi'', \varphi'\right>_k := \frac 1 {|G|} \sum_{g\in G_{\text{reg}}} \varphi''(g) \varphi'(g^{-1}) so that \left<\varphi_i'', \varphi_j'\right>_k = \delta_{ij}.

These relate to ordinary characters as follows: let \chi_1, \chi_2, \ldots,\chi_s be the standard irreducible characters of K[G], so they form an orthonormal basis of R_K(G).

  • The map e:P_k(G) \to R_K(G) satisfies: for each \varphi''\in P_k(G), the character e(\varphi'') vanishes on the p-singular conjugacy classes of G.
  • The map d:R_K(G) \to R_k(G) is the transpose of e; moreover, e is injective.
  • Thus, c = de is symmetric and positive definite.

Group S4 with p=2.

Let’s consider the usual character table for S4:

chartable_s4_2

For the ring R_k(G), we keep only the columns for e and (1,2,3), since the remaining conjugacy classes are 2-singular. Immediately, we obtain some linear relations:

  • d(\chi_{\text{alt}}) = d(\chi_{\text{triv}});
  • d(\chi_1) = d(\chi_2) + d(\chi_{\text{triv}}) = d(\chi_1 \chi_{\text{alt}});

So it remains to consider whether d(\chi_2) is simple. If it were not, it would be the sum of two 1-dimensional representations. But these are easily classified for S_n.

Lemma. There are at most two 1-dimensional representations of Sn over any field: the trivial and the alternating.

Proof

Indeed, these correspond to homomorphisms S_n \to k^*; since the image is abelian, such a map kills the commutator subgroup [S_n, S_n] = A_n and factors through S_n/A_n \to k^*. So we are left with the trivial and alternating representations. ♦

So d(\chi_2) is simple, since otherwise it would be 2d(\chi_{\text{triv}}), which it clearly is not. Hence:

D = \begin{pmatrix} 1 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1\end{pmatrix}\implies E = D^T = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 1\end{pmatrix}, \ C = DE=\begin{pmatrix} 4 & 2 \\ 2 & 3\end{pmatrix}.

The basis elements of P_k(G) and R_k(G) are thus:

mod_char_s4_mod_2

Note that we do have \left< \varphi_i'', \varphi_j'\right>_k = \delta_{ij} as expected. E.g.

\left< \varphi_1'', \varphi_1'\right>_k = \frac 1 {24}(1\cdot(8\cdot 1) + 8\cdot(2\cdot 1)) = 1.

Also e(\varphi_1'') = \chi_{\text{triv}}+ \chi_{\text{alt}}+ \chi_1 + \chi_1 \chi_{\text{alt}} and e(\varphi_2'') = \chi_2 + \chi_1 + \chi_1\chi_{\text{alt}} take all the 2-singular classes to 0.
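As a sanity check (not part of the original computation), we can recompute C = DE and the orthogonality relations directly, reading off \varphi_1'' = (8, 2) and \varphi_2'' = (8, -1) on the 2-regular classes e and 3-cyc from e(\varphi_1'') and e(\varphi_2'') above:

```python
from fractions import Fraction

# D from the text; E is its transpose, C = DE
D = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1]]
E = [list(row) for row in zip(*D)]
C = [[sum(D[i][k] * E[k][j] for k in range(5)) for j in range(2)]
     for i in range(2)]
assert C == [[4, 2], [2, 3]]

# pairing <phi'', phi'>_k over the 2-regular classes of S4:
# e (1 element) and the 3-cycles (8 elements); a 3-cycle is conjugate
# to its inverse, so we may pair values class by class
sizes = [1, 8]
phi1p,  phi2p  = [1, 1], [2, -1]     # simple modular characters
phi1pp, phi2pp = [8, 2], [8, -1]     # projective indecomposables

def pair(pp, p):
    return sum(Fraction(n * a * b, 24) for n, a, b in zip(sizes, pp, p))

gram = [[pair(pp, p) for p in (phi1p, phi2p)] for pp in (phi1pp, phi2pp)]
assert gram == [[1, 0], [0, 1]]
print("dual bases confirmed")
```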

Exercise

Let φ be the regular representation. Find the multiplicities of \varphi_1'' and \varphi_2'' in the decomposition of φ, and the multiplicities of \varphi_1' and \varphi_2' among its composition factors.


Group S4 with p=3.

We remove the column (1,2,3) and keep the remaining four. Clearly, d(\chi_{\text{triv}}) and d(\chi_{\text{alt}}) are simple since they’re of dimension 1. Next, we have d(\chi_2) = d(\chi_{\text{triv}}) + d(\chi_{\text{alt}}). It remains to see if d(\chi_1) and d(\chi_1 \chi_{\text{alt}}) are simple. Consider d(\chi_1). If it weren’t simple it must contain a submodule of dimension 1, which we saw is either d(\chi_{\text{triv}}) or d(\chi_{\text{alt}}).

  • In the former case, d(\chi_1) = d(\chi_{\text{triv}})+\varphi where \varphi((1,2,3,4)) = -2. Since dim \varphi = 2 this means both eigenvalues for (1,2,3,4) are -1, and so those for its square (1,3)(2,4) are +1. But this contradicts \varphi((1,2)(3,4)) = -2.
  • In the latter case, d(\chi_1) = d(\chi_{\text{alt}})+\varphi where \varphi((1,2)) = +2 so both eigenvalues for (ab) are +1. This means all elements of S4 have both eigenvalues equal to +1, which is absurd.

Thus d(\chi_1) and d(\chi_1 \chi_{\text{alt}}) are simple and we have:

D = \begin{pmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1\end{pmatrix},\ E =\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix},\ C =\begin{pmatrix} 2 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}.

The basis elements of P_k(G) and R_k(G) are thus:

mod_char_s4_mod_3


Group S5 with p=2.

First, we look at the usual character table for S5.

chartable_s5_2

Removing the 2-singular conjugacy classes leaves us with the columns for e, 3-cyc and 5-cyc. Note that d(\chi) = d(\chi\chi_{\text{alt}}) for any character χ, so we are left with 4 rows. Next d(\chi_3) = d(\chi_4) + d(\chi_{\text{triv}}), so we are left with d(\chi_{\text{triv}}), d(\chi_1), d(\chi_4), which are clearly linearly independent. Writing them as linear combinations of simple modular characters:

\begin{pmatrix} 1 & 0 & 0 \\ * & * & * \\ * & * & *\end{pmatrix} \begin{pmatrix} \varphi_1' \\ \varphi_2' \\ \varphi_3'\end{pmatrix} = \begin{pmatrix} d(\chi_{\text{triv}}) \\ d(\chi_1) \\ d(\chi_4)\end{pmatrix}.

where the matrix entries are all non-negative integers. It is not hard, albeit rather tedious, to list all possible 2 × 3 matrices. After solving for \varphi_2', \varphi_3', we are further reduced to 12 possibilities (corresponding to their values at e, the 3-cycle and 5-cycle):

((2, 5, 2), (2, -4, -3)),  ((2, -1, -3), (2, -1, 2)), ((2, -1, -3), (2, -4, -3)),

((2, -1, -3), (3, 0, 3)), ((2, -1, -3), (3, -3, -2)), ((2, -1, -3), (4, -2, -1)),

((2, -1, -3), (5, -1, 0)), ((3, 0, -2), (3, -3, -2)), ((3, 0, -2), (4, -2, -1)),

((3, 0, -2), (5, -1, 0)), ((4, 1, -1), (4, -2, -1)), ((4, 1, -1), (5, -1, 0)).

Since |\varphi(g)| \le |\varphi(e)| for any g, this immediately removes the first 7 possibilities. Next \varphi = (3, 0, -2) is invalid since for \varphi((1,2,3,4,5)) = -2 we must have 3 fifth roots of unity summing up to -2, which is impossible. So we’re left with two choices. It turns out \varphi_3' = (4, -2, -1) is the right choice, which we shall show below.

Construction

We need to show that modulo 2, the modular representation d(\chi_4) contains the trivial representation. Recall that \chi_4 is found in the representation W :=\text{Sym}^2 V where V is a 4-dimensional representation given by:

V := \{(x_1, x_2, x_3, x_4, x_5) \in k^5 : \sum_i x_i = 0\}

and G=S_5 acts on V by permuting the coordinates. Another way of expressing this: take spanning vectors v_1, \ldots, v_5 \in V with \sum_i v_i = \mathbf{0}, where g \in G acts by v_i \mapsto v_{g(i)}. Now W = \text{Sym}^2 V is spanned by the products v_i v_j (multiplication being commutative). A basis of W is given by the v_i v_j with 1 \le i \le j \le 4, and

v_5^2 = \sum_{j=1}^4 v_j^2 \pmod 2,\ v_i v_5 =\sum_{j=1}^4 v_i v_j \pmod 2.

In the semisimple case, W contains exactly one copy of the trivial representation but modulo 2, we can find two copies.

  • First, take the subspace X spanned by \sum_{1\le i<j\le 5} v_i v_j\in W, which is G-invariant. Note that this vector is non-zero (simplify via v_5 = v_1 + v_2 + v_3 + v_4).
  • Next, consider the map f : W \to k which takes v_i^2\mapsto 0 and v_i v_j \mapsto 1 for all 1 \le i < j \le 4. Note that f(v_5^2) = f(v_1^2 + \ldots + v_4^2) = 0 and f(v_i v_5) = f(v_i(v_1 + \ldots + v_4)) = 1, so we see that f is G-equivariant, where G acts trivially on k. Note that f(X)=0, so we have at least two copies of the trivial representation among the composition factors.
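The two trivial constituents can be double-checked by brute force (a verification sketch, not part of the original argument): encode the basis monomials v_i v_j with 1 \le i \le j \le 4 over GF(2), apply the reduction rules above for the index 5, and test invariance and equivariance on the adjacent transpositions generating S_5:

```python
from itertools import combinations

# a vector in W = Sym^2 V over GF(2) is a set of basis monomials (i, j)
# with 1 <= i <= j <= 4; addition is symmetric difference
def reduce(i, j):
    # express v_i v_j in the basis, using v5 = v1 + v2 + v3 + v4 (mod 2)
    i, j = min(i, j), max(i, j)
    if j <= 4:
        return {(i, j)}
    if i <= 4:                                 # v_i v_5 = sum_t v_i v_t
        return {(min(i, t), max(i, t)) for t in range(1, 5)}
    return {(t, t) for t in range(1, 5)}       # v5^2 = sum_t v_t^2

def add(*vecs):
    out = set()
    for v in vecs:
        out ^= set(v)
    return out

def act(g, vec):                               # g acts by v_i -> v_{g(i)}
    return add(*(reduce(g[i - 1], g[j - 1]) for i, j in vec))

def f(vec):                                    # f: v_i^2 -> 0, v_i v_j -> 1
    return sum(1 for i, j in vec if i < j) % 2

def transposition(a, b):                       # (a, b) as a map on {1..5}
    g = list(range(1, 6))
    g[a - 1], g[b - 1] = b, a
    return tuple(g)

gens = [transposition(t, t + 1) for t in range(1, 5)]
basis = [(i, j) for i in range(1, 5) for j in range(i, 5)]

x0 = add(*(reduce(i, j) for i, j in combinations(range(1, 6), 2)))
assert x0                                      # the spanning vector of X is non-zero
assert f(x0) == 0                              # f kills X
for g in gens:
    assert act(g, x0) == x0                    # X is G-invariant
    for w in basis:
        assert f(act(g, {w})) == f({w})        # f is G-equivariant
print("two trivial constituents confirmed")
```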

Thus, d(\chi_4) has at least one copy of the trivial representation, and we get:

\varphi_1' = (1,1,1), \quad \varphi_2' = (4, 1, -1), \quad \varphi_3' = (4, -2, -1).

An explicit representation affording \varphi_3' is given by:

(1,2)\mapsto \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\end{pmatrix}, (1,2,3,4,5)\mapsto \begin{pmatrix} 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1 \end{pmatrix},\text{ so }(1,2,3)\mapsto \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1\\ 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\end{pmatrix}.

To calculate the modular character values at (1,2,3) and (1,2,3,4,5), we compute their characteristic polynomials, giving x^4 + x^2 + 1 = (x^2 + x + 1)^2 and x^4 + x^3 + x^2 + x + 1 respectively. Lifting the roots of unity to K gives us \omega, \omega, \omega^2, \omega^2 for the first case and \zeta, \zeta^2, \zeta^3, \zeta^4 for the second, where \omega = e^{2\pi i/3} and \zeta = e^{2\pi i/5} so the values are 2(\omega + \omega^2) = -2 and \sum_{j=1}^4 \zeta^j = -1 respectively.
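These claims are easy to verify by machine. The sketch below checks the orders of the three matrices over GF(2) and the polynomials they satisfy, which pin down the stated characteristic polynomials: x^4+x^3+x^2+x+1 is irreducible over GF(2), and the matrix of (1,2,3) satisfies x^2+x+1, forcing characteristic polynomial (x^2+x+1)^2 = x^4+x^2+1:

```python
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def mpow(A, e):
    R = [[int(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(e):
        R = mul(R, A)
    return R

def madd(*Ms):                      # entrywise sum mod 2
    return [[sum(r) % 2 for r in zip(*rows)] for rows in zip(*Ms)]

I = [[int(i == j) for j in range(4)] for i in range(4)]
M12  = [[0,0,0,1], [0,0,1,1], [1,1,0,0], [1,0,0,0]]
M5   = [[0,0,0,1], [1,0,0,1], [0,1,0,1], [0,0,1,1]]
M123 = [[1,0,1,0], [0,1,1,1], [1,0,0,0], [1,1,0,0]]

assert mpow(M12, 2) == I                        # order 2
assert mpow(M5, 5) == I and M5 != I             # order 5
assert mpow(M123, 3) == I and M123 != I         # order 3

# M5 satisfies x^4 + x^3 + x^2 + x + 1 (irreducible over GF(2), degree 4),
# so this is its characteristic polynomial
assert madd(*(mpow(M5, k) for k in range(5))) == [[0] * 4] * 4
# M123 satisfies x^2 + x + 1, so its characteristic polynomial is
# (x^2 + x + 1)^2 = x^4 + x^2 + 1
assert madd(mpow(M123, 2), M123, I) == [[0] * 4] * 4
print("orders and characteristic polynomials confirmed")
```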

Conclusion

This gives:

D = \begin{pmatrix} 1 & 1 & 0 & 0 & 1 & 1 & 2\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1\end{pmatrix}, E = \begin{pmatrix} 1 & 0 & 0\\ 1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 0 & 1 \\ 2 & 0 & 1\end{pmatrix} \implies C=\begin{pmatrix}8 & 0 & 4 \\ 0 & 2 & 0 \\ 4 & 0 & 3\end{pmatrix}
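Since E = D^T here, C = DE is automatically symmetric; we can also confirm it is positive definite via Sylvester's criterion, in line with the summary above:

```python
D = [[1, 1, 0, 0, 1, 1, 2],
     [0, 0, 1, 1, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1]]
E = [list(row) for row in zip(*D)]          # E = D^T
C = [[sum(d[k] * E[k][j] for k in range(7)) for j in range(3)] for d in D]
assert C == [[8, 0, 4], [0, 2, 0], [4, 0, 3]]
assert all(C[i][j] == C[j][i] for i in range(3) for j in range(3))

# Sylvester's criterion: a symmetric matrix is positive definite iff all
# leading principal minors are positive
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

minors = [det([row[:k] for row in C[:k]]) for k in range(1, 4)]
assert minors == [8, 16, 16] and all(m > 0 for m in minors)
print("C = DE is symmetric positive definite")
```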

mod_char_s5_mod_2


Group S5 with p=3

From the character table of S5, we remove the two columns for 3-cyc and (2+3)cyc. The resulting modular characters satisfy the following:

d(\chi_4) = d(\chi_1) + d(\chi_{\text{alt}}), \quad d(\chi_4 \chi_{\text{alt}}) = d(\chi_1 \chi_{\text{alt}}) + d(\chi_{\text{triv}}).

The remaining 5 modular characters are linearly independent:

chartable_s5_mod_3

On the other hand, denote the 5 simple modular characters of R_k(G) by:

\varphi_1' := d(\chi_{\text{triv}}),\ \varphi_2' := d(\chi_{\text{alt}}), \ \varphi_3',\ \varphi_4',\ \varphi_5'.

Let us say \varphi\in R_k(G) is even if \varphi \cdot \varphi_2' = \varphi. Note that this holds if and only if φ is zero on the odd permutations: 2-cyc and 4-cyc (we’re ignoring the (2+3)cyc column for modular characters mod 3). Since φ is simple if and only if \varphi\cdot \varphi_2' is, we see that \varphi_3', \varphi_4', \varphi_5' are either all even, or exactly one of them is even. The former case is impossible: each \varphi_i' would then take the same value at 2-cyc as at 4-cyc, so the table of modular character values would have two identical columns, contradicting the linear independence of the \varphi_i'. Hence, we may assume \varphi_4' = \varphi_3' \cdot\varphi_2' and that \varphi_5' is even.

  • First, d(\chi_1) cannot contain \varphi_1' or \varphi_2'. E.g. if d(\chi_1) = \varphi_1' + \varphi we would have φ(e) = 3 and φ(5-cyc) = -2, which is impossible since we cannot have 3 fifth roots of unity summing to -2.
  • Clearly, d(\chi_1) = \varphi_3' + \varphi_4' = \varphi_3'(1 + \varphi_2') is impossible.
  • Now suppose d(\chi_1) = \varphi_3' + \varphi_5'. This means \varphi_3', \varphi_4', \varphi_5' are all of dimension 2. Since d(\chi_3) is even, it must be \varphi_3' + \varphi_4' + \varphi_5'. Hence we have \varphi_4' = d(\chi_3) - d(\chi_1), which takes 2-cyc to -2. Hence \varphi_3' = \varphi_4' \varphi_2' takes 2-cyc to +2; a 2-dimensional representation on which every transposition has both eigenvalues +1 must be trivial (contradiction).

Hence, we have shown that d(\chi_1) and d(\chi_1 \chi_{\text{alt}}) are both simple. Finally d(\chi_3) is either simple, or contains \varphi_1' + \varphi_2'. The latter would imply d(\chi_3) = \varphi_1' + \varphi_2' + \varphi where \varphi = (4, 0, 0, -1, -4). Hence the eigenvalues for (2+2)cyc are all -1, and since its order is coprime to p=3, the matrix of (2+2)cyc is -I. But (1,3)(2,4)·(1,2,3,4,5) = (1,4,5,3,2), so if M is the matrix of (1,2,3,4,5), then -M is the matrix of the 5-cycle (1,4,5,3,2), and we cannot have both M^5 = I and (-M)^5 = -M^5 = I.

Thus we may write \varphi_3' = d(\chi_1), \varphi_4' = d(\chi_1 \chi_{\text{alt}}), \varphi_5' = d(\chi_3) and we have:

D = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 &1 & 0 \\ 0 & 1 & 0 & 0 & 1 &0 & 0 \\ 0 & 0 & 1 &0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1&0 & 1 & 0 \\ 0 & 0 & 0 & 0& 0 & 0 & 1 \end{pmatrix}, E = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0& 0 & 1\end{pmatrix}, C = \begin{pmatrix} 2 & 0 & 0 & 1 & 0 \\ 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 2 & 0 & 0 \\ 1 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 1\end{pmatrix}.

 


Modular Representation Theory (II)

We continue our discussion of modular representations; recall that all modules are finitely-generated even if we do not explicitly say so. First, we introduce a new notation: for each projective finitely-generated k[G]-module P, we have a unique projective finitely-generated R[G]-module denoted \tilde P for which \tilde P/\pi \tilde P \cong P.

First we have:

Proposition. The matrices for e and d are transpose of each other. Hence, c = de is positive-semidefinite and symmetric.

Proof

We need to show \left< x, d(y)\right>_k = \left< e(x), y\right>_K for all basis elements x = [P] \in P_k(G) and y = [M] \in R_K(G), i.e. P is projective indecomposable and M is simple. As seen earlier:

  • By definition e(x) = [K\otimes_R Q], where Q:=\tilde P.
  • Write M = K\otimes_R N for some R[G]-module N, free over R, so that d(y) = [N/\pi N].
  • We need to show \dim_k \text{Hom}_{k[G]}(Q/\pi Q, N/\pi N) = \dim_K \text{Hom}_{K[G]}(K\otimes_R Q, K\otimes_R N).

Consider X:= \text{Hom}_{R[G]}(Q, N) = \text{Hom}_R(Q, N)^G; this is R-free since it is a submodule of a free R-module. Our desired result would follow if we could show:

X/\pi X \cong \text{Hom}_{k[G]}(Q/\pi Q, N/\pi N), \qquad K\otimes_R X \cong \text{Hom}_{K[G]}(K\otimes_R Q, K\otimes_R N).

But since Q is finitely-generated and R[G]-projective, we can write Q\oplus Q' \cong R[G]^n for some R[G]-module Q' and n>0. Since Hom and tensor product both commute with finite direct sums, it suffices to prove the above two isomorphisms for Q = R[G]. Now this is obvious since in this case X=N, and we have

\begin{aligned}\text{Hom}_{k[G]}(Q/\pi Q, N/\pi N) &= \text{Hom}_{k[G]}(k[G], N/\pi N) \cong N/\pi N = X/\pi X\\ \text{Hom}_{K[G]}(K\otimes_R Q, K\otimes_R N) &= \text{Hom}_{K[G]}(K[G], K\otimes_R N) \cong K\otimes_R N = K\otimes_R X\end{aligned}

as desired. ♦

Note

In fact, we shall see later that c is positive definite, or equivalently, it’s injective. If c(x) = 0, then 0=\left<x, de(x)\right>_k= \left<e(x), e(x)\right>_K so e(x) = 0. But a basis of P_k(G) is given by [P] for indecomposable projective P, so it suffices to show that e([P]) \in R_K(G) are linearly independent.


Modular Characters

Recall that a finitely-generated K[G]-module M can be represented by its character \chi_M : G\to K where \chi_M(g) := \text{tr}(g: M\to M), its trace as a K-linear map. Since K[G] is semisimple, standard character theory says \chi_M uniquely determines M. Now, \chi_M is a class function (i.e. it is constant on each conjugacy class), and the irreducible characters in fact form an orthonormal basis of the space of class functions, where orthonormality follows from Schur’s lemma.

We would like to produce a similar theory for k[G]-modules. The naive approach of taking the trace G\to k, g\mapsto \text{tr}(g: M\to M), does not lead to a satisfactory theory. The better approach is to lift the eigenvalues to K and obtain a function G_{\text{reg}} \to K.

Definition. An element g of G is said to be p-regular if its order is coprime to p; if it is not p-regular, we say it is p-singular. The collection of p-regular elements of G is denoted G_{\text{reg}}. Note that this is a union of conjugacy classes.

Now assume K contains all n-th roots of 1, where n = |G|. It follows that k contains all m-th roots of 1, where m is the largest factor of n coprime to p. The m-th roots of 1 in k are all distinct and the canonical map R → R/π induces a bijection between the m-th roots of 1 in K and in k. Denote this bijection by λ.

Now we are ready to define modular characters.

Definition. Let M be a finitely-generated k[G]-module and g\in G_{\text{reg}}. Consider the k-linear map g:M\to M and let r be the order of g. Since r | m, the polynomial x^r - 1 is separable over k and splits there, so g is diagonalizable with eigenvalues \zeta_1(g), \ldots, \zeta_n(g) in k, listed with multiplicity, where n = \dim_k M.

Now the modular character of M is defined as follows:

\varphi_M : G_{\text{reg}} \to K, \quad g \mapsto \sum_i \lambda^{-1}(\zeta_i(g)).

We have the following properties:

  • If N ⊆ M is a submodule, then \varphi_M = \varphi_N + \varphi_{M/N}. This is easily seen by taking a k-basis of N and extending it to a k-basis of M. Hence \varphi_M depends only on the class [M] in the Grothendieck group R_k(G), and we may write \varphi_{[M]} := \varphi_M.
  • For any k[G]-module M, consider its k-dual M* with G acting on it via g\cdot f := f\circ g^{-1} : M\to M. Then \varphi_{M^*}(g) = \varphi_M(g^{-1}).
  • For any two k[G]-modules M, N, we have \varphi_{M\otimes_k N}(g) = \varphi_M(g) \varphi_N(g) for all p-regular g. [ Note that the tensor product is over k and not k[G]. ]

Next we would like to relate modular characters and standard ones, so suppose N is an R[G]-module. Then we have:

N_K := K\otimes_R N, \qquad N_k := N/\pi N,

which are a K[G]-module and a k[G]-module respectively. The first gives a standard character \chi_N := \chi_{N_K} : G\to K and the second a modular character \varphi_N := \varphi_{N_k} : G_{\text{reg}} \to K. From the definition, we have:

\varphi_N = \chi_N |_{G_{\text{reg}}} : G_{\text{reg}} \to K.

Hence, in our cde diagram:

modular_rep_diagram

we have: \varphi_{d(N)} = \chi_N|_{G_\text{reg}}.

On the other hand, let us compare \varphi_P and \chi_{e(P)}. These clearly give the same values for all p-regular g. Furthermore, the following shows that \varphi_P(g) = 0 when g is p-singular.

Proposition. Let M be a finitely-generated projective R[G]-module. If g is p-singular then \chi_M(g) = 0.

Proof

First, note that M is projective over R[H] for any subgroup H of G since R[G] is free over R[H]. Hence, replacing G with the subgroup generated by g, we may assume G is cyclic and generated by g. This gives:

R[G] = R[T]/\left< T^{m\cdot p^r} - 1\right>

where m is coprime to p and r>0 since g is p-singular. We need to show that multiplication-by-T has trace zero for every indecomposable projective R[G]-module M. Such an M corresponds to an indecomposable projective k[G]-module N := M/\pi M, and we may classify all N by decomposing k[G]:

k[G] = k[T]/\left<T^{m\cdot p^r}-1\right> \cong \oplus_i k[T]/\left<(T - \zeta_i)^{p^r}\right>

where each direct-sum factor is indecomposable since its radical \left< T-\zeta_i\right> is of codimension 1. The corresponding R[G]-module is then M_i=R[T]/\left<T^{p^r} - \zeta_i'\right>, where \zeta_i' \in R is the root of unity lifting \zeta_i^{p^r}. Since r>0, multiplication by T has trace zero on each M_i: this is readily seen by taking the basis \{1, T, \ldots, T^{p^r-1}\}, which maps to \{T, \ldots, T^{p^r-1}, \zeta_i'\}. ♦
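The final step can be made concrete with a small sketch (using made-up values: p^r = 4 and z = 7 standing in for the unit \zeta_i'). Multiplication by T on R[T]/\left<T^{p^r} - z\right> is a companion matrix, and its powers T^k for 0 < k < p^r all have zero diagonal, hence zero trace:

```python
# multiplication by T on R[T]/(T^n - z) in the basis {1, T, ..., T^{n-1}}
def mult_by_T(n, z):
    # companion matrix: T * T^j = T^{j+1} for j < n-1, and T * T^{n-1} = z
    M = [[0] * n for _ in range(n)]
    for j in range(n - 1):
        M[j + 1][j] = 1
    M[0][n - 1] = z
    return M

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, z = 4, 7                    # p^r = 4; z is an arbitrary placeholder unit
M = mult_by_T(n, z)
P = [[int(i == j) for j in range(n)] for i in range(n)]
for k in range(1, n):
    P = mat_mul(P, M)
    assert sum(P[i][i] for i in range(n)) == 0    # trace of T^k vanishes
# and T^n acts as the scalar z, as expected
assert mat_mul(P, M) == [[z * int(i == j) for j in range(n)] for i in range(n)]
```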

Summary. In the c-d-e diagram above, we have:

\varphi_{d(N)} = \chi_N|_{G_{\text{reg}}}, \qquad \chi_{e(P)} = \varphi_P

where in the second equality, \varphi_P is extended to a function G→K by mapping all p-singular elements to zero.


Computing Pairing over k

Our next job is to find a formula for the pairing:

\left< P, M\right>_k := \dim_k \text{Hom}_{k[G]}(P, M)

for a projective k[G]-module P and general k[G]-module M. First some preliminaries:

Lemma. If M is a projective k[G]-module, then so are M^* and M\otimes_k N for any k[G]-module N.

Proof

Since M is projective, it is a direct summand of some k[G]^n. Since dual and tensor product commute with finite direct sums, it suffices to prove this for M = k[G]. In this case, M^* and M\otimes_k N are in fact free over k[G].

To check that a general k[G]-module V is free, it suffices to find a k-subspace W ⊆ V such that V=\oplus_{g\in G} gW. For the dual we pick k\cdot f \subseteq k[G]^*, where f : k[G] \to k takes e to 1 and every other g \in G to 0; for the tensor product, we pick k\cdot e\otimes_k N \subseteq k[G]\otimes_k N. ♦

Now we compute the pairing:

\left< P, M\right>_k = \dim_k (\overbrace{P^* \otimes_k M}^{:=Q})^G = \dim_k Q^G = \dim_R (\tilde{Q})^G = \dim_K (K\otimes_R \tilde{Q})^G

The third equality follows from this:

  • If Q is a projective k[G]-module, then (\tilde Q)^G is a free R-module and \dim_R (\tilde Q)^G = \dim_k Q^G.
  • [ To prove this, note that (\tilde Q)^G \subseteq \tilde Q is an R-submodule of a free R-module, so it must be R-free. Now write \tilde Q \oplus \tilde M \cong R[G]^n for some R[G]-module \tilde M, so that (\tilde Q)^G \oplus (\tilde M)^G\cong (R\cdot \sum_g g)^n. On the other hand, we have Q\oplus M\cong k[G]^n so Q^G \oplus M^G \cong (k\cdot \sum_g g)^n. Comparing terms then gives (\tilde Q)^G/\pi (\tilde Q)^G \cong Q^G. ]

Since \tilde Q is R[G]-projective, its character \chi_{\tilde Q} vanishes on the p-singular elements and the above equals:

\frac 1 {|G|} \sum_{g\in G_{\text{reg}}} \chi_{\tilde Q}(g) =\frac 1{|G|} \sum_{g\in G_{\text{reg}}} \varphi_Q(g) = \frac 1 {|G|} \sum_{g\in G_{\text{reg}}} \varphi_P(g) \varphi_M(g^{-1}).

Thus, we define the following:

Definition. A class function f:G_{\text{reg}} \to K is said to be a p-regular class function. An inner product is defined on the space of all such functions:

\left<\phi, \psi\right>_k := \frac 1 {|G|} \sum_{g\in G_{\text{reg}}} \phi(g) \psi(g^{-1}).

Hence, the bases \{[P]\} of P_k(G) and \{[M]\} of R_k(G), for projective indecomposable P and simple M, give rise to dual bases \{\varphi_P\} and \{\varphi_M\} under the above inner product.

We thus have:

Theorem. The map e:P_k(G) \to R_K(G) is injective. Hence r=\text{rank}\, R_k(G) = \text{rank}\, P_k(G) is the number of p-regular conjugacy classes of G.

Proof

We will show that for projective indecomposable P, the functions \varphi_P are linearly independent over K and hence over \mathbf{Z}. Indeed, if \sum_P c_P \varphi_P = 0, then taking the inner product with \varphi_M for each simple M gives:

0 = \sum_P c_P \left< \varphi_P, \varphi_M\right>_k\ \forall\ M\implies c_P = 0\ \forall\ P

since \{\varphi_M\} forms a dual basis for \{\varphi_P\}.

Note

This also shows that d, being the transpose of e, is surjective when extended to K. In fact, it is even true that d: R_K(G)\to R_k(G) is surjective, but the proof is rather involved and we may revisit it some day.


Grothendieck Rings

Given finitely-generated k[G]-modules M and N, their tensor product M\otimes_k N over k is also finitely-generated. By linear algebra, tensor product over a field is always exact for both M\otimes_k - and -\otimes_k M. So if 0 \to M' \to M \to M'' \to 0 is an exact sequence of k[G]-modules, then so is:

0\to M'\otimes_k N \to M\otimes_k N\to M''\otimes_k N \to 0.

This gives a pairing R_k(G) \times R_k(G) \to R_k(G) which is bi-additive. Since tensor product is associative, we obtain a ring structure on R_k(G). Furthermore if M is projective, then so is M\otimes_k N as we saw above. Thus, identifying P_k(G) with its image c(P_k(G)) \subseteq R_k(G), we see that P_k(G) is an ideal of R_k(G).

Example

Let us revisit the example from the previous article. The character table of S_3 is given by:

\begin{array}{c|ccc} & e & (1\,2) & (1\,2\,3) \\ \hline \chi_{\text{triv}} & 1 & 1 & 1 \\ \chi_{\text{alt}} & 1 & -1 & 1 \\ \chi & 2 & 0 & -1 \end{array}

For p=3, we have D = \begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1\end{pmatrix} and E = \begin{pmatrix} 1& 0 \\ 0 & 1\\ 1 & 1\end{pmatrix}. So the modular character tables for P_k(G) and R_k(G) are given by:

\begin{array}{c|cc} & e & (1\,2) \\ \hline \varphi_1'' & 3 & 1 \\ \varphi_2'' & 3 & -1 \end{array} \qquad \begin{array}{c|cc} & e & (1\,2) \\ \hline \varphi_1' & 1 & 1 \\ \varphi_2' & 1 & -1 \end{array}

Note that e(\varphi_1'') = \chi_{\text{triv}} + \chi and e(\varphi_2'') = \chi_{\text{alt}} + \chi and both vanish at the 3-cycle as expected. Furthermore one can verify by direct computation that \left< \varphi_i'', \varphi_j'\right>_k = \delta_{ij}, taking into account that the class of (1,2) has size 3.
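
This verification can be scripted; below is a Python sketch using the Brauer character values on the 3-regular classes e and (1,2) (of sizes 1 and 3), which follow from the character table of S_3 and the relations e(\varphi_i'') above. All 3-regular elements here are involutions or the identity, so \psi(g^{-1}) = \psi(g).

```python
from fractions import Fraction

# 3-regular classes of S3: {e} (size 1) and the three transpositions (size 3);
# every 3-regular element g here satisfies g^{-1} = g, so we pair class-by-class.
class_sizes = [1, 3]

def pairing(phi, psi):
    # <phi, psi>_k = (1/|G|) * sum over 3-regular g of phi(g) psi(g^{-1}), |G| = 6
    return sum(Fraction(s * a * b, 6) for s, a, b in zip(class_sizes, phi, psi))

# Brauer characters on (e, (1,2)): the simples phi'_1, phi'_2, and the
# projectives phi''_1 = chi_triv + chi, phi''_2 = chi_alt + chi.
phi_simple = [[1, 1], [1, -1]]
phi_proj = [[3, 1], [3, -1]]

for i, p in enumerate(phi_proj):
    for j, m in enumerate(phi_simple):
        assert pairing(p, m) == (1 if i == j else 0)
print("the bases {phi''} and {phi'} are dual")
```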

The Grothendieck ring R_k(G) is isomorphic to \mathbf{Z}[T]/\left<T^2 - 1\right>, where 1 and T map to \varphi_1' and \varphi_2' respectively. Then P_k(G) is the ideal generated by T+2 and R_k(G)/P_k(G) \cong \mathbf{F}_3.
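
The ring computation can be confirmed with a few lines of Python; here an element a + bT of \mathbf{Z}[T]/\left<T^2-1\right> is stored as the pair (a, b) (the encoding is ours). The images of the two projectives in R_k(G) are 2 + T and 1 + 2T, by the Cartan matrix C = DE.

```python
def mul(x, y):
    # (a + bT)(c + dT) = (ac + bd) + (ad + bc)T, using T^2 = 1
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c)

# c[W] = 2 + T and c[W'] = 1 + 2T generate the same ideal as T + 2:
assert mul((0, 1), (2, 1)) == (1, 2)        # T * (2 + T) = 1 + 2T

# Quotient by <T + 2>: send T to -2; then T^2 = 1 forces 3 = 0,
# so the quotient map lands in F_3.
def quotient(x):
    a, b = x
    return (a - 2 * b) % 3

assert quotient((2, 1)) == 0 and quotient((1, 2)) == 0   # the ideal dies
assert quotient((1, 0)) == 1                             # 1 survives: quotient is F_3
```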

Posted in Notes

Modular Representation Theory (I)

Let K be a field and G a finite group. We know that when char(K) does not divide |G|, the group algebra K[G] is semisimple. Conversely we have:

Proposition. If char(K) divides |G|, then K[G] is not semisimple.

Proof

Let I := \{ \sum_{g\in G} a_g\cdot g: a_g\in K, \sum_g a_g =0\}, a two-sided ideal of K[G] of codimension 1. If K[G] = I\oplus J for some left ideal J of K[G], then \dim J = 1, say J is spanned by \alpha = \sum_g b_g\cdot g. Now J\cong K[G]/I as left K[G]-modules, and G acts trivially on K[G]/I (since gh - h\in I for all g, h\in G), so g\alpha = \alpha for each g\in G; hence all the b_g's must be equal to some b\in K. But this means \alpha = b(\sum_g g) lies in I, since its sum of coefficients is |G|b = 0 in K, contradicting I\cap J = 0. ♦
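
For the smallest instance, take K = \mathbf{F}_2 and G of order 2. A nontrivial decomposition K[G] = K[G]x \oplus K[G](1-x) would require a nontrivial idempotent x, and a brute-force search, with a\cdot e + b\cdot g encoded as the pair (a, b) (our encoding), finds none:

```python
from itertools import product

# F_2[C_2]: element a*e + b*g stored as (a, b); g^2 = e gives
# (a + bg)(c + dg) = (ac + bd) + (ad + bc)g, coefficients mod 2.
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c + b * d) % 2, (a * d + b * c) % 2)

idempotents = [x for x in product(range(2), repeat=2) if mul(x, x) == x]
assert idempotents == [(0, 0), (1, 0)]   # only 0 and e

# So F_2[C_2] admits no nontrivial direct-sum decomposition, yet the
# augmentation ideal I (spanned by e + g) is a proper nonzero submodule:
# semisimplicity fails.
```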

For the case where char(K) divides |G|, we have modular representation theory. Here, we’ll require some knowledge of complete discrete valuation rings; throughout this article, we adopt the following notations and assumptions.

  • R is a complete discrete valuation ring with maximal ideal generated by π∈R.
  • K is the field of fractions of R.
  • k := R/(π) is the residue field.
  • p := char(k) divides |G| and char(K) = 0.

Representations of G over k are studied by first lifting them to R, then extending scalars to K. Let R_k(G), R_K(G) be the Grothendieck group of finitely-generated modules over k[G], K[G] respectively. Since char(K) = 0, K[G] is semisimple. Since k[G] is of finite dimension over k, it is artinian; we let J be its Jacobson radical. Finally, let P_k(G) be the Grothendieck group of finitely-generated projective modules over k[G]. From earlier results:

  • R_k(G), R_K(G), P_k(G) are free abelian groups, with the following bases, which we will fix from now on.
    • Basis of R_k(G) or R_K(G): [M] for simple modules M.
    • Basis of P_k(G): [P] for finitely-generated indecomposable projective modules P.
  • We have a map c:P_k(G) \to R_k(G) taking [P] to [P].
  • We have an isomorphism P_k(G) \cong R_k(G), taking [P] to [P/JP]; this corresponds to the identity matrix (under the above bases).

Note: if [M] = [N] \in R_k(G), then M and N have identical composition factors but M and N may not be isomorphic. On the other hand, if [M] = [N] \in P_k(G), then the indecomposable direct summands of M and N are the same and thus M ≅ N.

Next, we introduce the following pairings:

  • The pairing \left<-, -\right>_K : R_K(G) \times R_K(G) \to\mathbf{Z} taking [M], [N] \mapsto \dim_K \text{Hom}_{K[G]}(M, N) is a bi-additive map. The above basis for R_K(G) is orthogonal by Schur’s lemma. Extending K if necessary, we may assume it is orthonormal.
  • The pairing \left<-, -\right>_k : P_k(G) \times R_k(G) \to \mathbf{Z} taking [P], [M] \mapsto \dim_k \text{Hom}_{k[G]}(P, M) is a bi-additive map. The given bases for P_k(G) and R_k(G) are dual with respect to this pairing. [ Indeed, for [M] \in R_k(G) with M simple, we have \text{Hom}(P, M) = \text{Hom}(P/JP, M) since JM = 0; if P is projective indecomposable, then P/JP is simple and the result follows from Schur’s lemma. Assuming k is large enough, we may assume that \text{Hom}(P, M) = k if P/JP ≅ M. ]


The next step is to define the maps d and e in the following diagram, such that c = d\circ e:

P_k(G) \xrightarrow{\;e\;} R_K(G) \xrightarrow{\;d\;} R_k(G), \qquad c = d\circ e.

Definition of d

To define d:R_K(G) \to R_k(G), let M be a finitely-generated K[G]-module.

Proposition. There is an R[G]-module N such that K\otimes_R N \cong M. The class of this module [N/\pi N] \in R_k(G) depends only on M.

Proof

Step 1. First prove the existence of N.

Let N’ ⊂ M be a free R-module with a basis spanning M, so K\otimes_R N' \cong M as K-vector spaces. Now let N := \sum_{g\in G} gN', which is also a free R-module since it is finitely generated and torsion-free, and has a basis spanning M. Thus K\otimes_R N \cong M. Since N is G-invariant it is an R[G]-module.

Step 2. Proof of uniqueness: initial step.

Suppose N_1, N_2 are R[G]-modules such that K\otimes_R N_1 \cong K\otimes_R N_2. Replacing N_1 by a scalar multiple (this does not affect [N_1]), we may assume N_2 \subseteq N_1. Now for each x\in N_1 we have \pi^r x\in N_2 for some r; since N_1 has a finite basis, we have \pi^r N_1 \subseteq N_2\subseteq N_1 for some r>0.

Step 3. Now we show: if \pi^r N_1 \subseteq N_2\subseteq N_1, then [N_1/\pi N_1]= [N_2/\pi N_2] in R_k(G).

This is by induction on r. When r=1, we have an exact sequence of k[G]-modules:

0\to \pi N_1 / \pi N_2 \to N_2 /\pi N_2 \to N_1/\pi N_1 \to N_1/N_2 \to 0.

Since \pi N_1/\pi N_2 \cong N_1/N_2 we have [N_2/\pi N_2] = [N_1/\pi N_1] in the group R_k(G). For r>1, let N_3 := \pi^{r-1}N_1 + N_2; then we have:

\pi^{r-1} N_1\subseteq N_3\subseteq N_1, \qquad \pi N_3\subseteq N_2\subseteq N_3.

By induction hypothesis, [N_1/\pi N_1] = [N_3/\pi N_3] = [N_2/\pi N_2] and we’re done. ♦

We thus define d:R_K(G) \to R_k(G) via: given [M]\in R_K(G) pick an R[G]-module N such that K\otimes_R N \cong M. We then define d([M]) := [N/\pi N].

Note

An exact sequence of K[G]-modules 0\to M'\to M\to M''\to 0 splits as M ≅ M’ ⊕ M”; picking modules N’ and N” for M’ and M” via the above proposition, N := N’ ⊕ N” also satisfies the proposition for M, and we obtain d([M]) = d([M’]) + d([M”]), so d respects the defining relations of R_K(G) and is well-defined.

Example

Suppose G = \{e, g\} is of order 2. Let R = \mathbf{Z}_2 (the 2-adic integers), K = \mathbf{Q}_2 and k = \mathbf{F}_2. Take M = K[G] itself. One can obtain the R[G]-module N (of rank 2 over R) in different ways, e.g.:

g \mapsto \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}, \qquad g\mapsto \begin{pmatrix} 1 & 1 \\ 0 & -1\end{pmatrix}.

Indeed, they’re isomorphic over K[G] since the above two matrices are diagonalizable with eigenvalues -1, +1. However, reducing modulo 2 gives non-isomorphic k[G]-modules:

g\mapsto \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \pmod 2, \qquad g\mapsto \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix} \pmod 2.

Note that they have the same composition factors, although one is semisimple and the other is not.
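
A quick Python check (plain 2 × 2 integer matrices, no libraries) confirms the picture:

```python
# Two integral lifts of the order-2 action of g.
M1 = [[1, 0], [0, -1]]
M2 = [[1, 1], [0, -1]]
I = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Both matrices square to the identity, so each defines a representation of G.
assert matmul(M1, M1) == I and matmul(M2, M2) == I
# Both have trace 0 and determinant -1, hence eigenvalues +1, -1: they are
# diagonalizable and conjugate over K, i.e. the same K[G]-module.
assert M1[0][0] + M1[1][1] == 0 == M2[0][0] + M2[1][1]

# Reducing mod 2: one becomes the identity (semisimple), the other a
# nontrivial unipotent matrix (not semisimple).
mod2 = lambda A: [[x % 2 for x in row] for row in A]
assert mod2(M1) == I
assert mod2(M2) == [[1, 1], [0, 1]] != I
```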


Definition of e

Next, we will define e: P_k(G) \to R_K(G). This is given in two steps: first we consider the map \psi: P_R(G) \to P_k(G), [P] \mapsto [P/\pi P], where P_R(G) is the Grothendieck group of finitely-generated projective R[G]-modules. [ Warning: since R[G] is not artinian, we have to avoid using results from the previous two articles. ] Note that \psi is well-defined:

  • If P is projective over R[G], then P\oplus Q\cong R[G]^n for some n, so (P/\pi P)\oplus (Q/\pi Q)\cong k[G]^n and so P/\pi P is projective over k[G].
  • An exact sequence of projective modules must split, so \psi is a well-defined homomorphism of abelian groups.

The next step is to show that \psi is an isomorphism. First, injectivity:

Lemma. If P, Q are finitely-generated and projective such that P/\pi P\cong Q/\pi Q as k[G]-modules, then P\cong Q as R[G]-modules.

Proof

Let f_0 : P/\pi P \to Q/\pi Q be an isomorphism of k[G]-modules. Since P is projective, the map P\to P/\pi P \to Q/\pi Q lifts to a map f_1: P\to Q of R[G]-modules whose reduction mod π is f_0. To show that f_1 is bijective, it suffices to show it is an isomorphism of R-modules. But P, Q are finite free R-modules of the same rank (they are projective over R[G], hence over R, and thus torsion-free; the ranks agree since P/\pi P \cong Q/\pi Q). So we only need to show that det(f_1) is invertible in R, i.e. nonzero modulo π. But det(f_1) mod π = det(f_0) ≠ 0. ♦

Hence, \psi: P_R(G) \to P_k(G) is injective. Finally, we have the following:

Lemma. For any finitely-generated projective k[G]-module N, there is a finitely-generated projective R[G]-module M such that M/\pi M\cong N.

Proof

We may assume N is indecomposable, so N is a direct summand of k[G], i.e. k[G] = N\oplus N'. The component of 1 ∈ k[G] in N then gives an idempotent x ∈ N (i.e. x^2=x) with N = k[G]x. We wish to lift x to an idempotent y ∈ R[G] such that x ≡ y mod π.

  • Claim: if S is any ring and I ⊆ S is an ideal satisfying I^2=0, then any idempotent y ∈ S/I lifts to an idempotent x ∈ S.
  • Proof: let z ∈ S be any lift of y. Then x := 3z^2-2z^3 satisfies, modulo I: x ≡ 3z^2-2z^3 ≡ 3z - 2z ≡ z (using z^2 ≡ z mod I), so it is another lift of y. Now x^2-x = 0, since it is divisible by (z^2-z)^2 ∈ I^2 = 0. ♦

Now consider the ring S := R[G]/\pi^2 R[G] with ideal I:=\pi R[G]/\pi^2 R[G]. This satisfies I^2 = 0 and S/I ≅ k[G], so by the claim, x lifts to an idempotent y’ ∈ S.

Repeat the process with the ring S' := R[G]/\pi^3 R[G] and ideal I' := \pi^2 R[G]/\pi^3 R[G]. Again I'^2 = 0 and S’/I’ ≅ S, so y’ lifts to an idempotent y” ∈ S’. Repeating this process, since R is a complete discrete valuation ring, we obtain an idempotent y ∈ R[G] such that x ≡ y mod π.

Then R[G] is the direct sum of left modules M := R[G]y and M’ := R[G](1-y), and M/\pi M\cong N. ♦
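
The successive lifting can be watched numerically. The sketch below is our own illustration, not from the text: we take G = S_3, p = 3, work in (\mathbf{Z}/3^8)[S_3] as a truncation of \mathbf{Z}_3[S_3], and lift the idempotent 2(e + (1,2)) of \mathbf{F}_3[S_3] via the iteration z \mapsto 3z^2 - 2z^3 from the claim.

```python
from itertools import permutations

MOD = 3 ** 8   # truncation of R = Z_3: all coefficients taken mod 3^8

G = list(permutations(range(3)))           # S_3 as permutations of {0, 1, 2}

def compose(s, t):
    return tuple(s[t[i]] for i in range(3))

def mul(x, y):
    # convolution product in the group ring (Z/3^8)[S_3]
    out = {g: 0 for g in G}
    for s, a in x.items():
        for t, b in y.items():
            st = compose(s, t)
            out[st] = (out[st] + a * b) % MOD
    return out

e, swap = (0, 1, 2), (1, 0, 2)
z = {g: 0 for g in G}
z[e] = z[swap] = 2                         # lift of 2(e + (1,2)), idempotent mod 3

# Each step z -> 3z^2 - 2z^3 squares the pi-adic accuracy (the I -> I^2 claim),
# so 4 steps give an idempotent mod 3^16, in particular mod 3^8.
for _ in range(4):
    z2 = mul(z, z)
    z3 = mul(z2, z)
    z = {g: (3 * z2[g] - 2 * z3[g]) % MOD for g in G}

assert mul(z, z) == z                      # an exact idempotent in (Z/3^8)[S_3]
assert z[e] % 3 == 2 and z[swap] % 3 == 2  # still lifts x mod 3
assert all(z[g] % 3 == 0 for g in G if g not in (e, swap))
```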

Thus, \psi is an isomorphism and we can define:

e : P_k(G) \stackrel{\psi^{-1}}{\longrightarrow} P_R(G) \longrightarrow R_K(G)

where the second map takes [M]\in P_R(G) to [K\otimes_R M] \in R_K(G).

Relations Between d and e

Finally we wish to prove that c = d∘e. Indeed, let [P]\in P_k(G) and [Q] = \psi^{-1}([P]) \in P_R(G), so that Q/\pi Q\cong P. Then e([P]) = [K\otimes_R Q] \in R_K(G). To compute d([K\otimes_R Q]), we may use Q as the R[G]-module in the definition of d, giving d([K\otimes_R Q]) = [Q/\pi Q] = [P] \in R_k(G) as desired.


Concrete Example

Let’s compute an explicit example for G = S_3, R = \mathbf{Z}_3 (the 3-adic integers), K = \mathbf{Q}_3, k = \mathbf{F}_3. It’s easy to calculate R_K(G) since K[G] is semisimple. From elementary character theory, there are three simple modules:

\begin{array}{c|ccc} & e & (1\,2) & (1\,2\,3) \\ \hline \chi_{\text{triv}} & 1 & 1 & 1 \\ \chi_{\text{alt}} & 1 & -1 & 1 \\ \chi & 2 & 0 & -1 \end{array}

  • trivial rep. : g acts trivially on K;
  • alternating rep. : g acts via sgn(g) on K;
  • other : take the subspace of K^3 given by (x, y, z) satisfying x+y+z = 0; G acts by permuting the coordinates.

This gives a basis of R_K(G). On the other hand, let’s consider all simple k[G]-modules. We still have the trivial and alternating representations, clearly simple. But now the third representation has a submodule:

V = \{(x,y,z) \in k^3 : x+y+z = 0\} \implies k\cdot (1,1,1) \subseteq V

since char(k) = 3. This subspace is a copy of the trivial representation, and the quotient is the alternating one. Hence, k[G] has exactly two simple modules, the trivial and alternating ones, and the matrix for d:R_K(G) \to R_k(G) is:

D = \begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1\end{pmatrix}

Next we compute e; to do that we need to find the indecomposable projective k[G]-modules. These can be found by decomposing k[G] itself. From above, we see that the key is to find idempotents of k[G] (specifically, indecomposable projectives correspond to primitive idempotents, but we won’t use that here). Clearly x = \frac 1 2 (e + (1,2)) is idempotent (note \frac 1 2 = 2 in \mathbf{F}_3), and we obtain:

\begin{aligned} W := k[G]x &= \left< e + (1,2), (1,3) + (1,2,3), (2,3)+(1,3,2)\right>, \\ W' :=k[G](1-x) &= \left< e - (1,2), (1,3) - (1,2,3), (2,3) - (1,3,2)\right>.\end{aligned}

We leave it to the reader to prove that these are indecomposable. To compute e([W]) and e([W’]), we lift them to projective R[G]-modules and then tensor with K: this gives

\begin{aligned} K[G]x &= \left< e + (1,2), (1,3) + (1,2,3), (2,3)+(1,3,2)\right>, \\ K[G](1-x) &= \left< e - (1,2), (1,3) - (1,2,3), (2,3) - (1,3,2)\right>.\end{aligned}

which have character values (3, 1, 0) and (3, -1, 0) respectively, i.e. \chi + \chi_{\text{triv}} and \chi + \chi_{\text{alt}}. So the matrix for e is:

E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{pmatrix}\implies C=DE =\begin{pmatrix} 2&1 \\ 1&2\end{pmatrix}.
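
Both the character decomposition and the product C = DE can be double-checked with a few lines of Python (classes e, transpositions, 3-cycles of sizes 1, 3, 2; all values are real, so \chi(g^{-1}) = \chi(g)):

```python
from fractions import Fraction

# Character table of S3 on the classes e, (1,2), (1,2,3).
sizes = [1, 3, 2]
chi = {"triv": [1, 1, 1], "alt": [1, -1, 1], "std": [2, 0, -1]}

def inner(a, b):
    # ordinary inner product <a, b>_K = (1/6) sum over g of a(g) b(g^{-1})
    return sum(Fraction(s * x * y, 6) for s, x, y in zip(sizes, a, b))

# multiplicities of each ordinary irreducible in K[G]x and K[G](1-x):
assert [inner([3, 1, 0], c) for c in chi.values()] == [1, 0, 1]    # chi_triv + chi
assert [inner([3, -1, 0], c) for c in chi.values()] == [0, 1, 1]   # chi_alt + chi

# These multiplicities are the columns of E; composing with D gives C = DE.
D = [[1, 0, 1], [0, 1, 1]]
E = [[1, 0], [0, 1], [1, 1]]
C = [[sum(D[i][k] * E[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
assert C == [[2, 1], [1, 2]]
```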

Posted in Notes

Projective Modules and the Grothendieck Group

This is a continuation of the previous article. Throughout this article, R is an artinian ring (and hence noetherian) and all modules are finitely-generated. Let K(R) be the Grothendieck group of all finitely-generated R-modules; K(R) is the free abelian group generated by [M] for simple modules M.

Now let P(R) be the Grothendieck group of all finitely-generated projective R-modules. Thus, P(R) is the free abelian group generated by [P] for finitely-generated projective P, modulo the relations [P] = [Q] + [Q’] for each short exact sequence 0 → Q’ → P → Q → 0 of projective modules. By the lemma here, Q is then a direct summand of P, so P ≅ Q ⊕ Q’.

Theorem. The group P(R) is the free abelian group generated by [P] for indecomposable finitely-generated projective modules P. Furthermore, if [P] = [Q_1] + \ldots + [Q_r] in P(R) for projective modules P, Q_1, \ldots, Q_r, then P \cong Q_1 \oplus \ldots \oplus Q_r.

Proof

By the Krull-Schmidt theorem, every finitely-generated module is uniquely written as a direct sum of indecomposable modules; if the module is projective, so is each direct summand. Hence [P] is a sum of [Q] for indecomposable projective Q. Furthermore, each short exact sequence 0 → Q’ → P → Q → 0 gives a decomposition P ≅ Q ⊕ Q’, so [P] = [Q] + [Q’] holds if and only if the indecomposable summands on the LHS and RHS match. The general case of r>2 follows by induction on r. ♦

Now we define a map:

c_R : P(R) \to K(R)

which takes a projective module P to its class [P] in K(R). Note that this is a well-defined group homomorphism. The next map we define is:

f_R : P(R) \to K(R)

which is given by f_R([P]) := [P/JP], where J := J(R) is the Jacobson radical of R. Note that this is well-defined since a short exact sequence 0 → Q’ → P → Q → 0 of projective modules splits to give P ≅ Q ⊕ Q’ and so P/JP ≅ (Q/JQ) ⊕ (Q’/JQ’), and we get [P/JP] = [Q/JQ] + [Q’/JQ’] in K(R). Furthermore, we saw earlier that P \mapsto P/JP gives a bijection between projective indecomposable modules and simple modules. Since P(R) is freely generated by the projective indecomposable modules while K(R) is freely generated by the simple modules, we have:

Theorem. The map f_R : P(R) \to K(R), [P] \mapsto [P/JP] is a group isomorphism.

The map c_R\circ f_R^{-1} : K(R) \to K(R) can be represented by a matrix with integer entries; let’s compute this for a simple example.


Example

Let R be the ring of upper-triangular 3 × 3 matrices with real entries. Then:

R = \left\{ \begin{pmatrix} * & * & * \\ 0 & * & * \\ 0 & 0 & *\end{pmatrix} \right\} =\overbrace{\left \{ \begin{pmatrix} * & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{pmatrix}\right \}}^I \oplus \overbrace{\left \{ \begin{pmatrix} 0 & * & 0 \\ 0 & * & 0 \\ 0 & 0 & 0 \end{pmatrix} \right\}}^J \oplus \overbrace{ \left\{ \begin{pmatrix} 0 & 0 & * \\ 0& 0 & * \\ 0 & 0 & *\end{pmatrix} \right\}}^K

is a direct sum of indecomposable projective modules. On the other hand, K is isomorphic to the module of column 3-vectors, which has a composition series: 0 \subset \mathbf{R} e_1 \subset \mathbf{R} e_1 \oplus \mathbf{R} e_2 \subset \mathbf{R}^3. Denoting the consecutive factors by A, B, C, we see that in K(R) we have [I] = [A],  [J] = [A]+[B],  [K] = [A]+[B]+[C], so the matrix is:

\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1\\ 0 & 0& 1\end{pmatrix}.
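
The invariance of the flag 0 ⊂ Re_1 ⊂ Re_1 ⊕ Re_2 ⊂ R^3 under upper-triangular matrices, which is what makes the successive factors A, B, C one-dimensional, can be checked mechanically; a small randomized Python sketch (our own illustration, over the integers for exactness):

```python
import random

random.seed(0)
n = 3

def random_upper():
    # a random upper-triangular n x n matrix
    return [[random.randint(-5, 5) if j >= i else 0 for j in range(n)]
            for i in range(n)]

def act(U, v):
    return [sum(U[i][j] * v[j] for j in range(n)) for i in range(n)]

# Acting on a vector supported in the first k coordinates stays there:
# the flag spanned by e_1, then e_1, e_2, is invariant.
for _ in range(100):
    U = random_upper()
    for k in range(1, n):
        v = [random.randint(-5, 5) if i < k else 0 for i in range(n)]
        w = act(U, v)
        assert all(w[i] == 0 for i in range(k, n))
print("flag invariant; consecutive factors are 1-dimensional")
```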

Exercise

Calculate the corresponding matrix for the ring:

R=\begin{pmatrix} \mathbf{Q} & \mathbf{Q}(\sqrt 2) & \mathbf{Q}(\sqrt[4]2)\\ 0 & \mathbf{Q}(\sqrt 2) & \mathbf{Q}(\sqrt[4]2) \\ 0 & 0 & \mathbf{Q}(\sqrt[4]2)\end{pmatrix}

Pairing

Next, consider the pairing given by:

\left<-, -\right> : P(R) \times K(R) \to K(\mathbf{Z}), \qquad \left<[P], [M]\right> :=[\text{Hom}_R(P, M)].

We claim that this is well-defined; indeed for the first argument, an exact sequence of projective modules splits as P ≅ Q ⊕ Q’, so Hom(P, M) is the direct sum of Hom(Q, M) and Hom(Q’, M), and taking classes gives:

[\text{Hom}_R(P,M)] =[\text{Hom}_R(Q, M)]+ [\text{Hom}_R(Q', M)].

On the other hand, if 0 → M’ → M → M” → 0 is an exact sequence of modules, then since P is projective, the resulting sequence is also exact:

0 \to \text{Hom}_R(P, M') \to\text{Hom}_R(P, M)\to \text{Hom}_R(P, M'')\to 0

and we get [\text{Hom}_R(P, M)] = [\text{Hom}_R(P, M')] + [\text{Hom}_R(P, M'')] as well.

Posted in Notes

Projective Modules and Artinian Rings

Projective Modules

Recall that Hom(M, -) is left-exact: for any module M and exact 0\to N' \to N\to N'', we get an exact sequence 0\to \text{Hom}_R(M, N') \to \text{Hom}_R(M, N) \to \text{Hom}_R(M,N'').

Definition. A module M is projective if Hom(M, -) is exact, i.e. if for any surjective N→N”, the resulting HomR(M, N) → HomR(M, N”) is also surjective.

Unwinding the definition, we get: if f:N→N” is surjective and g:M→N” is any map, then there exists h:M→N such that fh = g.

Some basic properties:

Proposition.

  • The base ring R is projective over itself.
  • If \{M_i\} is a collection of projective modules, then so is \oplus_i M_i.
  • If M⊕M’ is projective, then so are M and M’.

Proof

For the first property, HomR(R, N) is identified with N, and thus \text{Hom}_R(R, N) \to \text{Hom}_R(R, N'') is identified with the given map N → N”.

For the second, we use the fact that Hom(-, N) turns direct sums into direct products.

  • Suppose N → N” is surjective.
  • Since Mi is projective, each HomR(Mi, N)→HomR(Mi, N”) is surjective.
  • Hence ∏i HomR(Mi, N) → ∏i HomR(Mi, N”) is surjective.
  • But this is HomR(⊕iMi, N) → HomR(⊕iMi, N”).
  • Thus ⊕iMi is projective.

Finally, if M ⊕ M’ is projective, then any surjective N → N” gives a surjective map HomR(M ⊕ M’, N) → HomR(M ⊕ M’, N”), which is the direct sum of HomR(M, N) → HomR(M, N”) and HomR(M’, N) → HomR(M’, N”). Hence both component maps are surjective, and M and M’ are projective. ♦

The following lemma will be used multiple times.

Lemma. If f : M→P is a surjective map to a projective module P, then P is a direct summand of M, i.e. there exists a module Q such that M\cong P \oplus Q.

Proof

Indeed, given the identity map i : P → P, there is a map h : P → M such that fh = i. By the splitting lemma, P is a direct summand of M. ♦

Corollary.

M is projective if and only if it is a direct summand of a free module.

Proof

⇐: from the above properties, R is projective, so a direct sum of copies of R (i.e. a free module) is also projective. Hence if M is a direct summand of a free module, M is projective.

⇒: pick a free module F and a surjective map f : F → M (e.g. F can be freely generated by the elements of M). By the above lemma, M is a direct summand of F. ♦

Here’s a special case.

Lemma. If R is a semisimple ring, then every module is projective.

Proof

Every R-module is a direct sum of simple modules, and each simple module is a direct summand of R (and thus projective). ♦

Example

If we let R be the ring of 2 × 2 upper-triangular matrices with real entries, then from:

R = \left\{\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}\right\} = \left\{ \begin{pmatrix} * & 0 \\ 0 & 0 \end{pmatrix}\right\} \oplus \left\{ \begin{pmatrix} 0 & * \\ 0 & *\end{pmatrix}\right\} = \left\{ \begin{pmatrix} a & a \\ 0 & 0\end{pmatrix}\right\} \oplus \left\{ \begin{pmatrix} 0 & * \\ 0 & * \end{pmatrix} \right\}

we find at least three examples of projective submodules of R. Exercise: find a non-projective R-module.
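
Closure of these summands under left multiplication is easy to check by hand or by machine; a small randomized Python sketch (the membership predicates are our encoding of the three subsets displayed above):

```python
import random

random.seed(1)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def random_R():
    # a random element of the upper-triangular ring R
    a, b, c = (random.randint(-9, 9) for _ in range(3))
    return [[a, b], [0, c]]

in_col1 = lambda M: M[0][1] == M[1][0] == M[1][1] == 0              # {(* 0; 0 0)}
in_col2 = lambda M: M[0][0] == M[1][0] == 0                         # {(0 *; 0 *)}
in_diag = lambda M: M[0][0] == M[0][1] and M[1][0] == M[1][1] == 0  # {(a a; 0 0)}

# Left-multiplying a member of each subset by any element of R stays inside it,
# so each is a left submodule (a direct summand of R, hence projective).
for _ in range(200):
    U = random_R()
    a, b, c = (random.randint(-9, 9) for _ in range(3))
    assert in_col1(mul(U, [[a, 0], [0, 0]]))
    assert in_col2(mul(U, [[0, b], [0, c]]))
    assert in_diag(mul(U, [[a, a], [0, 0]]))
```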


When the Base Ring is Artinian

Suppose now R is artinian (and hence noetherian). For the rest of this article, we will only look at finitely generated R-modules.

Theorem. There is a bijection between

  1. simple R-modules M up to isomorphism, and
  2. finitely generated indecomposable projective R-modules P up to isomorphism,

via P\mapsto M := P/JP, where J := J(R) is the Jacobson radical of R.

Proof

Note that all modules are of finite length since they’re artinian and noetherian.

Step 1. If PQ are projective, then Hom(PQ) → Hom(P/JPQ/JQ) is surjective.

If f : P → Q is a map of projective modules, we have f(JP) = J·f(P) ⊆ JQ, so f induces a map P/JP → Q/JQ. Conversely, if g : P/JP → Q/JQ is a map, then composing with the canonical map gives P → Q/JQ; since P is projective, this lifts to a map P → Q.

Step 2. If P is projective and indecomposable, then P/JP is simple.

By step 1, End(P) → End(P/JP) is a surjective ring homomorphism. Since P is indecomposable, End(P) is local, and a quotient of a local ring is local, so End(P/JP) is local and thus P/JP is also indecomposable. But then P/JP is semisimple, so it is simple.

Step 3. If PQ are projective and indecomposable such that P/JP ≅ Q/JQ, then P ≅ Q.

Suppose g : P/JP → Q/JQ is an isomorphism. By step 1, g comes from some f : P → Q. Since g is surjective, Q = f(P) + JQ. But then JQ = f(JP) + J^2 Q, so repeating this gives Q = f(P) + J^n Q for all n. Since R is artinian, J is nilpotent, so f is surjective. The above lemma then says Q is a direct summand of P, so P ≅ Q ⊕ Q’ for some Q’. Since P is indecomposable and Q ≠ 0, we get Q’ = 0 and P ≅ Q.

Step 4. It remains to show: for simple M, there is a projective indecomposable P such that P/JP ≅ M.

Now, M ≅ R/I for a maximal left ideal I of R. By Krull-Schmidt, write R = \oplus_i P_i as a direct sum of indecomposable modules; each P_i is projective since it is a direct summand of R. Composing with the surjection R → M, at least one of the restrictions P_i → M is non-zero, hence surjective (since M is simple). By the above, P_i/JP_i is simple; since P_i → M kills JP_i (as JM = 0), it induces a surjection P_i/JP_i → M of simple modules, which is an isomorphism. ♦

Recall that every simple module is a quotient of R; here is an analogous result for projective indecomposable P.

Lemma. If P is a finitely-generated indecomposable projective module over R, then it is a direct summand of R.

Proof

Since P is projective and finitely-generated, we have a surjective map R^n \to P for some n>0. By the above lemma, R^n\cong P\oplus Q since P is projective. Now apply the Krull-Schmidt theorem: since P is indecomposable, it must be isomorphic to a direct summand of R. ♦

Posted in Notes