## Matrix Rings

In this post, we’ll be entering the matrix.

Let R be a ring. The ring $M_{n\times n}(R)$ is the set of n × n matrices whose entries are elements of R, where the addition and multiplication operations are given by the usual matrix addition and multiplication. To write this down explicitly, for a given matrix A, let $A_{ij}$ be the entry on the i-th row and j-th column. Then addition and product are given by:

$A+B=C, \ AB=D,$ where $C_{ij} = A_{ij} + B_{ij}, \ D_{ij} = \sum_{k=1}^n A_{ik}B_{kj}.$

For example, in the case of 2 × 2 matrices, we have:

$\begin{pmatrix} a & b\\c & d\end{pmatrix}\begin{pmatrix} p & q\\r & s\end{pmatrix} = \begin{pmatrix} ap+br & aq+bs\\ cp+dr & cq+ds\end{pmatrix}.$

Since R may not be commutative, do take care of the product order. In the above example, avoid changing the order of variables in multiplication: $ap + br \neq pa + br$ in general, for example.
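As a quick illustration (not from the original post), here is a minimal Python sketch of these two operations, using integer entries as a stand-in for a general ring; the function names are my own:

```python
# Minimal sketch: matrix addition and multiplication over a ring,
# with plain Python lists and integer entries as the example ring.

def mat_add(A, B):
    """Entrywise sum: C[i][j] = A[i][j] + B[i][j]."""
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    """Row-by-column product: D[i][j] = sum_k A[i][k] * B[k][j].
    The factor order A[i][k] * B[k][j] matters when the ring
    is non-commutative."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
```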

Theorem. The set $M_{n\times n}(R)$ forms a ring under matrix addition and multiplication, with unity given by the identity matrix. This is called the (full) matrix ring of R.

We won’t go through the entire list of axioms, but let’s see why associativity holds:

\begin{aligned}(AB)_{ij} = \sum_k A_{ik}B_{kj} &\implies ((AB)C)_{ij} = \sum_l (AB)_{il} C_{lj} = \sum_{k,l} A_{ik}B_{kl}C_{lj}\\ (BC)_{ij} = \sum_k B_{ik}C_{kj} &\implies (A(BC))_{ij} = \sum_l A_{il}(BC)_{lj} =\sum_{k, l} A_{il}B_{lk}C_{kj}. \end{aligned}

The two sums can be obtained from each other via swapping k and l. In fact, even without the above explicit computations, we can reason out in a “meta”-manner why it has to be true:

[ Upon expansion, the entries of (AB)C and A(BC) are sums of monomials $A_{ik}B_{kl}C_{lj}$ with integer coefficients, each monomial taking one factor from each matrix in that order; comparing the two products is a matter of comparing $n^2$ such expressions. Since these agree for all real values of the entries, they must be identical as formal expressions, so if associativity holds for real numbers, it must hold for general rings also. ]
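The meta-argument can even be checked mechanically. The following Python sketch (an illustration of mine, with hypothetical helper names) models the free non-commutative ring on symbols, representing an element as a dict from words (tuples of generator names) to integer coefficients, and verifies (AB)C = A(BC) for 2 × 2 matrices of distinct generators:

```python
# Formal check of (AB)C = A(BC) with entries in the free
# non-commutative ring Z<a00, a01, ..., c11>.

def add(p, q):
    """Sum of two formal polynomials (dicts word -> coefficient)."""
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
        if r[w] == 0:
            del r[w]
    return r

def mul(p, q):
    """Product: words multiply by concatenation, so order is preserved."""
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2
            r[w] = r.get(w, 0) + c1 * c2
            if r[w] == 0:
                del r[w]
    return r

def gen(name):
    """A single generator, as a one-term polynomial."""
    return {(name,): 1}

def mat_mul(A, B):
    n = len(A)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            s = {}
            for k in range(n):
                s = add(s, mul(A[i][k], B[k][j]))
            row.append(s)
        out.append(row)
    return out

n = 2
A = [[gen(f"a{i}{j}") for j in range(n)] for i in range(n)]
B = [[gen(f"b{i}{j}") for j in range(n)] for i in range(n)]
C = [[gen(f"c{i}{j}") for j in range(n)] for i in range(n)]
assert mat_mul(mat_mul(A, B), C) == mat_mul(A, mat_mul(B, C))
```

Since the generators here do not commute at all, the check covers every ring at once.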

From our understanding of matrix algebras, most of these properties are quite self-evident:

• If n > 1 and R ≠ {0}, the ring $M_{n\times n}(R)$ is not commutative even when R is: e.g. $\begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}\begin{pmatrix}0 & 1\\0 & 0\end{pmatrix} \neq \begin{pmatrix}0 & 1\\0 & 0\end{pmatrix}\begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}$. For n = 1, we have $M_{1\times 1}(R) \cong R$.
• For matrices A, B over a commutative ring R, the transpose (which flips the matrix about the main diagonal, i.e. takes the (ij)-entry to the (ji)-entry) gives $(AB)^t = B^t A^t$. Over a non-commutative R this can fail, since $((AB)^t)_{ij} = \sum_k A_{jk}B_{ki}$ while $(B^t A^t)_{ij} = \sum_k B_{ki}A_{jk}$.
• If n > 1, then there are many examples of zero-divisors of $M_{n\times n}(R)$. E.g. if v and w are n-dimensional column vectors with $v\cdot w = 0$, then $(v|v|\ldots|v)^t\,(w|w|\ldots|w)$ is the zero matrix.
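A concrete instance of the last bullet, as a quick Python check with v = (1, 1) and w = (1, −1):

```python
# Zero divisors in M_2(Z): v = (1, 1) and w = (1, -1) satisfy v.w = 0.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

v, w = (1, 1), (1, -1)
Vt = [list(v), list(v)]             # rows are v, i.e. (v|v)^t
W = [[w[0], w[0]], [w[1], w[1]]]    # columns are w, i.e. (w|w)
print(mat_mul(Vt, W))  # [[0, 0], [0, 0]]
```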

Question: does the set of symmetric matrices form a subring of $M_{n\times n}(R)$? [ Answer (ROT13) : Ab vg qbrf abg; gur cebqhpg bs gjb flzzrgevp zngevprf jvgu vagrtre ragevrf vf abg arprffnevyl flzzrgevp. ]

## For Division Rings

One particularly important feature of matrix rings is as follows.

Theorem. If D is a division ring, then $M_{n\times n}(D)$ has no ideals other than {0} and itself.

A ring R ≠ {0} which has no ideals other than {0} and itself is said to be simple (i.e. a simple ring, like a simple individual, has effectively no ideal). We already saw earlier that if R is commutative, then this only happens if R is a field. For the case of non-commutative rings, things are much more complicated.

Proof.

Let E[ij] be the matrix with all zeros, except for a 1 at the i-th row and j-th column. Let A be any non-zero element of $M_{n\times n}(D)$; we need to show that the ideal <A> generated by A is the whole ring.

Now A has some non-zero entry, say $A_{ef} \neq 0$. The matrix $B := E[ee]\, A\, E[ff]$ then has only one non-zero entry left: $B = A_{ef} E[ef]$. Since D is a division ring, $A_{ef}$ is a unit, so $E[ef] = (A_{ef}^{-1} I)\,B$ lies in <A>. Since $E[ie]\, E[ef]\, E[fj] = E[ij]$, this also shows all E[ij] lie in <A>. As every matrix C equals $\sum_{i,j} (C_{ij} I)\, E[ij]$, it follows that <A> is the whole ring. ♦
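The two computations in the proof can be sanity-checked over the field ℚ (a commutative division ring). A small Python sketch, with helper names of my own:

```python
# Check: E[ee] A E[ff] isolates the (e, f) entry, and
# E[ie] E[ef] E[fj] = E[ij], over the field Q.
from fractions import Fraction as F

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def E(i, j, n):
    """Matrix unit: 1 in position (i, j), zeros elsewhere."""
    return [[F(1) if (r, c) == (i, j) else F(0) for c in range(n)]
            for r in range(n)]

n = 3
A = [[F(r * n + c + 1) for c in range(n)] for r in range(n)]
e, f = 0, 2
B = mat_mul(mat_mul(E(e, e, n), A), E(f, f, n))
# B has A[e][f] in position (e, f) and zeros elsewhere:
assert B == [[A[e][f] if (r, c) == (e, f) else F(0) for c in range(n)]
             for r in range(n)]
# E[ie] E[ef] E[fj] = E[ij]:
i, j = 1, 1
assert mat_mul(mat_mul(E(i, e, n), E(e, f, n)), E(f, j, n)) == E(i, j, n)
```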

Exercise. Let R be a general ring. If I is an ideal of R, then $M_{n\times n}(I)$, the set of all n × n matrices with entries in I, is clearly an ideal of $M_{n\times n}(R)$. Is every ideal of $M_{n\times n}(R)$ necessarily of this form?

Note. There’s much more to be said about linear algebra over a division ring, including some subtle difficulties compared with linear algebra over a field. But that’s another story for another day, one that we hope we can come back to.

Let’s consider a general problem.

Definition. Let $A, B\in M_{n\times n}(R)$, where R is not necessarily commutative. If AB = BA = I, then we say A is invertible with inverse B. In other words, A is invertible iff it is a unit in the matrix ring.

Some questions to guide us include:

• If AB = I, does it mean A is invertible? I.e. is BA = I?
• If A is invertible, does it mean $A^t$ is invertible?

For commutative rings, the answer to both questions is YES.

## For Commutative Rings

If we assume R is commutative, we can breathe much more easily. For starters, the determinant function can be extended to matrices over any commutative ring.

Definition. The determinant of an n × n matrix with entries in R is defined by:

$\det(A) = \sum_{\pi\in S_n} \text{sgn}(\pi) A_{1,\pi(1)} A_{2, \pi(2)}\ldots A_{n,\pi(n)},$

where sgn takes an odd (resp. even) permutation to -1 (resp. +1).
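A direct transcription of this formula into Python (a sketch of mine; the entries here are integers, but the same code runs verbatim over any commutative ring whose elements support + and *):

```python
# Determinant via the permutation-sum formula.
from itertools import permutations

def sign(perm):
    """Sign of a permutation of 0..n-1, via its inversion count."""
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= A[i][perm[i]]   # A_{1,pi(1)} ... A_{n,pi(n)}
        total += term
    return total

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

Note the n! terms: this is for illustration, not for computing large determinants.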

It turns out the determinant is still multiplicative, i.e. det(A)det(B) = det(AB). This time, rather than proving it via painful algebraic manipulation, we’ll appeal to the meta-reasoning: upon expansion, each side is a polynomial in the $2n^2$ entries of A and B, with integer coefficients. Since det(A)det(B) = det(AB) holds for real matrices, we conclude that the two polynomials must match, so the identity holds over any commutative ring.

In particular, if AB=I, then det(A)det(B) = 1, so det(A) is a unit in the ring R.

Conversely, we know from Cramer’s rule that

$A \cdot \text{adj}(A) = \det(A)\cdot I,$

where adj(A) is the adjugate matrix of A. [ Explicitly, entry (ij) of adj(A) is $(-1)^{i+j}$ times the determinant of the (n−1)×(n−1) matrix obtained from A by deleting row j and column i. ]

Now Cramer’s rule is well-known for real numbers. To argue that it works for a general commutative ring, we’ll appeal to the meta-reasoning again. Both sides, upon expansion, give us $n^2$ polynomials in the entries of A. Since both sides are equal whenever the entries of A are real, the polynomials must themselves be equal.
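Here is a Python sketch verifying A · adj(A) = det(A) · I for one integer matrix; all helper names are my own:

```python
# Check A . adj(A) = det(A) . I over Z. Entry (i, j) of adj(A) is
# (-1)^(i+j) times the minor with row j and column i deleted.
from itertools import permutations

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def adj(A):
    n = len(A)
    def minor(r, c):
        return [[A[i][j] for j in range(n) if j != c]
                for i in range(n) if i != r]
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)]
            for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
d = det(A)                                        # here d = 8
I = [[d if i == j else 0 for j in range(3)] for i in range(3)]
assert mat_mul(A, adj(A)) == I
```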

Now, suppose det(A) is a unit in R. Then we know that AB = I, where $B = \det(A)^{-1}\, \text{adj}(A)$.

In conclusion, there exists a matrix B such that AB=I, if and only if det(A) is a unit in R.

On the other hand, by a symmetrical argument, we also have: there exists a matrix C such that CA=I, if and only if det(A) is a unit in R. So if det(A) is a unit, then AB = CA = I for some B and C, and it follows that C = C(AB) = (CA)B = B.
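To illustrate, a Python sketch of the 2 × 2 case over the ring ℤ/26, where the example matrix has det(A) = 9, a unit mod 26 (names and example values are mine):

```python
# Inverse via B = det(A)^(-1) . adj(A), over Z/26.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inverse_mod(A, m):
    d = det2(A) % m
    d_inv = pow(d, -1, m)   # modular inverse; ValueError if d is not a unit
    adjA = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]
    return [[(d_inv * adjA[i][j]) % m for j in range(2)] for i in range(2)]

def mat_mul_mod(A, B, m):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

A = [[3, 3], [2, 5]]        # det = 9, invertible mod 26
B = inverse_mod(A, 26)
assert mat_mul_mod(A, B, 26) == [[1, 0], [0, 1]]
assert mat_mul_mod(B, A, 26) == [[1, 0], [0, 1]]
```

(The three-argument `pow` with exponent −1 computes modular inverses in Python 3.8+.)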

Summary. Let R be a commutative ring and A a square matrix with entries in R. Then each of the following is equivalent to A being invertible:

• det(A) is a unit;
• there exists a B such that AB=I or BA=I;
• the transpose of A is invertible.

The last statement follows from the fact that A and its transpose have the same determinant.

## For Non-Commutative Rings

Unfortunately, all hell breaks loose when R fails to be commutative. For one thing, no one knows how to define a sensible determinant function, although constructions are available for some special cases. E.g. how do we define the determinant of $\begin{pmatrix} a & b \\ c & d\end{pmatrix}$? Should it be $ad - bc$, or $da - bc$, or …what?

Even for 1 × 1 matrices (i.e. elements of R itself), strange things can happen. E.g. it’s possible for non-zero a, b, c to satisfy ba = 1, ca = 0. For a concrete example, let V be the vector space of all infinite sequences of real numbers $(x_0, x_1, x_2, \ldots)$ and R the set of all linear maps V → V. This is a ring with addition (f+g)(v) = f(v)+g(v) and multiplication given by composition (the unity 1 is the identity map). Now let

\begin{aligned}a(x_0, x_1, x_2,\ldots) &= (0, x_0, x_1, \ldots)\\ b(x_0, x_1, x_2, \ldots) &= (x_1, x_2, x_3, \ldots)\\ c(x_0, x_1, x_2, \ldots) &= (x_0, 0, 0, \ldots).\end{aligned}

It’s clear that, composing from right to left, ba = 1 and ca = 0. Note however that ab ≠ 1, since $ab(x_0, x_1, x_2, \ldots) = (0, x_1, x_2, \ldots)$: so a has a left inverse yet is not invertible, answering the first question in the negative for non-commutative rings.
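These three maps can be modelled directly in Python (an illustration of mine), representing a sequence as a function n ↦ xₙ:

```python
# The three linear maps on infinite sequences, as higher-order functions.
def a(x):   # right shift: (x0, x1, ...) -> (0, x0, x1, ...)
    return lambda n: 0 if n == 0 else x(n - 1)

def b(x):   # left shift: (x0, x1, ...) -> (x1, x2, ...)
    return lambda n: x(n + 1)

def c(x):   # projection: (x0, x1, ...) -> (x0, 0, 0, ...)
    return lambda n: x(0) if n == 0 else 0

x = lambda n: n + 1                      # the sequence (1, 2, 3, ...)
first = lambda s: [s(n) for n in range(5)]
print(first(b(a(x))))   # ba = 1:  [1, 2, 3, 4, 5]
print(first(c(a(x))))   # ca = 0:  [0, 0, 0, 0, 0]
print(first(a(b(x))))   # ab != 1: [0, 2, 3, 4, 5]
```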
