Matrix Rings
In this post, we’ll be entering the matrix.
Let R be a ring. The ring Mn×n(R) is the set of n × n matrices whose entries are elements of R, where the addition and multiplication operations are given by the usual matrix addition and multiplication. To write these down, for a given matrix A, let Aij be the entry in the i-th row and j-th column. Then the sum and product are given by:

$$(A + B)_{ij} = A_{ij} + B_{ij}, \qquad (AB)_{ij} = \sum_{k=1}^n A_{ik} B_{kj}.$$
For example, in the case of 2 × 2 matrices, we have:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} p & r \\ q & s \end{pmatrix} = \begin{pmatrix} ap + bq & ar + bs \\ cp + dq & cr + ds \end{pmatrix}.$$
Since R may not be commutative, take care with the order of products: avoid swapping the factors within each term, since ap + bq ≠ ap + qb in general.
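To make the order-of-multiplication point concrete, here is a minimal Python sketch (the Word class and mul2x2 are our own illustrative names) that multiplies 2 × 2 matrices whose entries are formal non-commuting symbols, so the order of each factor stays visible:

```python
class Word:
    """Formal sums of non-commuting words: '*' concatenates, '+' collects terms."""
    def __init__(self, terms):
        self.terms = [terms] if isinstance(terms, str) else list(terms)
    def __add__(self, other):
        return Word(self.terms + other.terms)
    def __mul__(self, other):
        return Word([x + y for x in self.terms for y in other.terms])
    def __repr__(self):
        return " + ".join(self.terms)

def mul2x2(A, B):
    """2x2 matrix product, carefully keeping each factor's left/right order."""
    (a, b), (c, d) = A
    (p, r), (q, s) = B
    return [[a * p + b * q, a * r + b * s],
            [c * p + d * q, c * r + d * s]]

a, b, c, d, p, q, r, s = map(Word, "abcdpqrs")
print(mul2x2([[a, b], [c, d]], [[p, r], [q, s]]))
# [[ap + bq, ar + bs], [cp + dq, cr + ds]] -- note 'bq', never 'qb'.
```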
Theorem. The set Mn×n(R) of n × n matrices over R forms a ring under matrix addition and multiplication, with unity given by the identity matrix. This is called the (full) matrix ring of R.
We won’t go through the entire list of axioms, but let’s see why associativity holds:

$$((AB)C)_{ij} = \sum_l (AB)_{il} C_{lj} = \sum_l \sum_k A_{ik} B_{kl} C_{lj}, \qquad (A(BC))_{ij} = \sum_k A_{ik} (BC)_{kj} = \sum_k \sum_l A_{ik} B_{kl} C_{lj}.$$
The two double sums can be obtained from each other by swapping the order of summation over k and l, so they are equal. In fact, even without the above explicit computation, we can reason out in a “meta”-manner why it has to be true:
[ Upon expansion, the entries of (AB)C and A(BC) are polynomials in the entries of A, B and C with integer coefficients; since comparing (AB)C and A(BC) is a matter of comparing n² such polynomials, if the identity holds for real numbers, it must hold for general rings also. ]
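As a quick sanity check of this reasoning, the sketch below (standard library only; matmul is our own helper) spot-checks associativity for random matrices over the ring Z/4:

```python
import random

def matmul(A, B, mod):
    """Product of square matrices with entries reduced mod `mod`."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

random.seed(1)
mod, n = 4, 3
rand = lambda: [[random.randrange(mod) for _ in range(n)] for _ in range(n)]
for _ in range(100):
    A, B, C = rand(), rand(), rand()
    assert matmul(matmul(A, B, mod), C, mod) == matmul(A, matmul(B, C, mod), mod)
```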
From our understanding of matrix algebras, most of the following properties are quite self-evident:
- If n > 1 and R ≠ {0}, the ring Mn×n(R) is not commutative, even when R is; on the other hand, M1×1(R) ≅ R, so M1×1(R) is commutative iff R is.
- For matrices A, B, the transpose (which flips the matrix about the main diagonal, i.e. takes the (i, j)-entry to the (j, i)-entry) satisfies (A + B)ᵗ = Aᵗ + Bᵗ; when R is commutative, we also have (AB)ᵗ = BᵗAᵗ.
- If n > 1, then there are many examples of zero-divisors in Mn×n(R). E.g. if v and w are non-zero n-dimensional column vectors with v·w = 0, then (v|v|…|v)ᵗ (w|w|…|w) is the zero matrix.
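For instance, here is a quick numerical check of this construction in Python (the variable names are ours):

```python
n = 3
v = [1, 1, 0]
w = [2, -2, 7]       # v · w = 1*2 + 1*(-2) + 0*7 = 0

Vt = [v[:] for _ in range(n)]                      # rows of (v|v|v)^t are all v
W  = [[w[i] for _ in range(n)] for i in range(n)]  # columns of (w|w|w) are all w
prod = [[sum(Vt[i][k] * W[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert prod == [[0] * n for _ in range(n)]         # the product is the zero matrix
```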
Question: does the set of symmetric matrices form a subring of Mn×n(R)? [ Answer (ROT13) : Ab vg qbrf abg; gur cebqhpg bs gjb flzzrgevp zngevprf jvgu vagrtre ragevrf vf abg arprffnevyl flzzrgevp. ]
For Division Rings
One particularly important feature of matrix rings is as follows.
Theorem. If D is a division ring, then the matrix ring Mn×n(D) has no ideals other than {0} and itself.
A ring R ≠ {0} which has no ideals other than {0} and itself is said to be simple (i.e. a simple ring, like a simple individual, has effectively no ideal). We already saw earlier that if R is commutative, then this only happens if R is a field. For the case of non-commutative rings, things are much more complicated.
Proof.
Let E[i, j] be the matrix with all zeros, except for a 1 at the i-th row and j-th column. Let A be any non-zero element of Mn×n(D); we need to show that the ideal <A> generated by A is the whole ring.
Now A has some non-zero entry, say Aef ≠ 0. The matrix B = E[e, e] A E[f, f] = Aef E[e, f] lies in <A> and has only one non-zero entry. Since D is a division ring, Bef = Aef is a unit, and thus E[e, f] = Aef⁻¹B lies in <A>. Since E[i, e] E[e, f] E[f, j] = E[i, j], this shows all E[i, j] lie in <A>. As every matrix is a D-linear combination of the E[i, j], it follows that <A> is the whole ring. ♦
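The bookkeeping with matrix units is easy to verify by machine. Below is a sketch over the division ring Q, using Python's fractions (E, matmul and the sample matrix A are our own names):

```python
from fractions import Fraction

n = 3

def E(i, j):
    """Matrix unit: 1 in position (i, j), 0 elsewhere."""
    return [[Fraction(int(r == i and c == j)) for c in range(n)] for r in range(n)]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# The identity E[i,e] E[e,f] E[f,j] = E[i,j] used in the proof:
i, e, f, j = 0, 1, 2, 0
assert matmul(matmul(E(i, e), E(e, f)), E(f, j)) == E(i, j)

# Isolating one entry: E[e,e] A E[f,f] = A_ef E[e,f], then scale by A_ef^(-1).
A = [[Fraction(0)] * n for _ in range(n)]
A[e][f] = Fraction(5)                       # the non-zero entry A_ef
B = matmul(matmul(E(e, e), A), E(f, f))
assert [[A[e][f] ** -1 * x for x in row] for row in B] == E(e, f)
```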
Exercise. Let R be a general ring. If I is an ideal of R, then Mn×n(I) is clearly an ideal of Mn×n(R), where Mn×n(I) denotes the set of all n × n matrices with entries in I. Is every ideal of Mn×n(R) necessarily of this form?
Note. There’s much more to be said about linear algebra over a division ring, including some subtle difficulties compared with linear algebra over a field. But that’s another story for another day, one that we hope we can come back to.
Let’s consider a general problem.
Definition. Suppose A, B ∈ Mn×n(R), where R is not necessarily commutative, are such that AB = BA = I. Then we say A is invertible. In other words, A is invertible iff it is a unit in the ring of matrices.
Some questions to guide us include:
- If AB = I, does it mean A is invertible? I.e. is BA = I?
- If A is invertible, does it mean Aᵗ is invertible?
For commutative rings, the answer to both questions is YES.
For Commutative Rings
If we assume R is commutative, we can breathe much more easily. For starters, the determinant function can be extended to matrices over any commutative ring.
Definition. The determinant of an n × n matrix A with entries in R is defined by:

$$\det(A) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma)\, A_{1,\sigma(1)} A_{2,\sigma(2)} \cdots A_{n,\sigma(n)},$$

where the sum runs over all permutations σ of {1, …, n}, and sgn takes an odd (resp. even) permutation to -1 (resp. +1).
It turns out the determinant is still multiplicative, i.e. det(A)det(B) = det(AB). This time, rather than proving it via painful algebraic manipulation, we’ll appeal to the meta-reasoning: upon expansion, both sides are polynomials in the 2n² entries of A and B, with integer coefficients. Since det(A)det(B) = det(AB) holds for real matrices, we conclude that the polynomials on both sides must match.
In particular, if AB=I, then det(A)det(B) = 1, so det(A) is a unit in the ring R.
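The permutation-sum definition translates directly into code. The sketch below (standard library only; det, sign and matmul are our own helpers) computes determinants over the commutative ring Z/6 and spot-checks multiplicativity:

```python
from itertools import permutations
import random

def sign(perm):
    """Sign of a permutation of (0, ..., n-1): +1 if even, -1 if odd."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det(A, mod):
    """Permutation-sum determinant over the commutative ring Z/mod."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total % mod

def matmul(A, B, mod):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % mod
             for j in range(n)] for i in range(n)]

random.seed(0)
mod, n = 6, 3
rand = lambda: [[random.randrange(mod) for _ in range(n)] for _ in range(n)]
for _ in range(50):
    A, B = rand(), rand()
    assert det(matmul(A, B, mod), mod) == det(A, mod) * det(B, mod) % mod
```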
Conversely, we know from Cramer’s rule that

$$A \cdot \mathrm{adj}(A) = \mathrm{adj}(A) \cdot A = \det(A) \cdot I,$$

where adj(A) is the adjugate matrix of A. [ Roughly, entry (i, j) of adj(A) is obtained by removing row j and column i from A, taking the determinant of the resulting (n-1)×(n-1) matrix, and attaching the sign (-1)^(i+j). ]
Now Cramer’s rule is well-known for real numbers. To argue that it works over a general commutative ring, we’ll appeal to the meta-reasoning again. Both sides, upon expansion, give us n² polynomials in the entries of A. Since both sides are equal whenever the entries of A are real, the polynomials must themselves be equal.
Now, suppose det(A) is a unit in R. Then we know that AB = I, where B = det(A)⁻¹ · adj(A).
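As an illustration, the following sketch verifies Cramer’s rule and builds this inverse over Z/6, leaning on sympy’s Matrix.adjugate and mod_inverse (assuming sympy is available; the sample matrix is ours):

```python
from sympy import Matrix, eye, mod_inverse

mod = 6
A = Matrix([[1, 2], [3, 5]])     # det(A) = -1 ≡ 5 (mod 6), a unit in Z/6

adj = A.adjugate()               # Matrix([[5, -2], [-3, 1]])

# Cramer's rule: A · adj(A) = det(A) · I holds over any commutative ring.
assert (A * adj - A.det() * eye(2)).applyfunc(lambda x: x % mod) == Matrix.zeros(2, 2)

# Since det(A) is a unit mod 6, B = det(A)^(-1) · adj(A) inverts A.
B = (mod_inverse(A.det() % mod, mod) * adj).applyfunc(lambda x: x % mod)
assert (A * B).applyfunc(lambda x: x % mod) == eye(2)
```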
In conclusion, there exists a matrix B such that AB = I if and only if det(A) is a unit in R.
On the other hand, by a symmetrical argument, we also have: there exists a matrix C such that CA = I if and only if det(A) is a unit in R. So if det(A) is a unit, then AB = CA = I for some B and C, and it follows that C = C(AB) = (CA)B = B.
Summary. Let R be a commutative ring. A square matrix A with entries in R is invertible if and only if any one of the following equivalent conditions holds:
- det(A) is a unit;
- there exists a B such that AB=I or BA=I;
- the transpose of A is invertible.
The last statement follows from the fact that A and its transpose have the same determinant.
For Non-Commutative Rings
Unfortunately, all hell breaks loose when R fails to be commutative. For one thing, no one knows how to define a sensible determinant function, although constructions are available for some special cases. E.g. how do we define the determinant of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$? Should it be ad–bc, or da–bc, or …what?
Even for 1 × 1 matrices (i.e. elements of R itself), strange things can happen. E.g. it’s possible for a, b, c ≠ 0 to satisfy ba = 1, ca = 0. For a concrete example, let V be the vector space of all infinite sequences of real numbers (x0, x1, x2, …) and R the set of all linear maps V → V. This is a ring with addition (f+g)(v) = f(v)+g(v) and multiplication given by composition (the unity 1 is the identity map). Now let

a : (x0, x1, x2, …) ↦ (0, x0, x1, …),
b : (x0, x1, x2, …) ↦ (x1, x2, x3, …),
c : (x0, x1, x2, …) ↦ (x0, 0, 0, …).
It’s clear that, composing from right to left, ba = 1 and ca = 0, even though a, b, c ≠ 0 (and note ab ≠ 1, since ab kills the first coordinate).
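Here is a small Python model of this example, representing a sequence as a function from index to value (all names are ours):

```python
# Linear maps on infinite sequences, modeled as functions from index to value.
def a(x):                   # right shift: (x0, x1, ...) -> (0, x0, x1, ...)
    return lambda n: 0 if n == 0 else x(n - 1)

def b(x):                   # left shift: (x0, x1, ...) -> (x1, x2, ...)
    return lambda n: x(n + 1)

def c(x):                   # projection: (x0, x1, ...) -> (x0, 0, 0, ...)
    return lambda n: x(0) if n == 0 else 0

x = lambda n: n + 1         # the sequence (1, 2, 3, ...)

# ba = 1: shifting right then left recovers x.
assert all(b(a(x))(n) == x(n) for n in range(10))
# ca = 0: after the right shift, position 0 holds 0, so the projection kills it.
assert all(c(a(x))(n) == 0 for n in range(10))
# Yet ab != 1: shifting left then right zeroes out position 0.
assert a(b(x))(0) == 0 != x(0)
```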