Consider an $m \times n$ matrix $A$ whose entries are elements of a PID (in our case, $\mathbb{Z}$). Then we can find invertible square matrices $P$ and $Q$ with entries in the PID such that $PAQ$ is diagonal, with diagonal entries $d_i$ satisfying $d_i \mid d_j$ if $i < j$. This matrix need not be square; "diagonal" just means the only nonzero entries are the $d_i$ in positions $(i, i)$, for varying $i$. Here is how I think you should think about it:
- A is a matrix that represents a quotient module of $\mathbb{Z}^n$. Each column represents one generator, and each row represents one relation.
- P represents combining of the relations. This is like how you can take the relations $2x + y = 0$ and $x + y = 0$ and get $x = 0$ and $x + y = 0$ by subtracting the second from the first. With P alone you can get into a row reduced form, but not necessarily diagonal.
- Q represents combining the generators. So maybe instead of using the generator set $\{x, y\}$ we want to use $\{x + y, y\}$. Q lets us do that, as the sketch below shows.
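To make Q's role concrete, here is a minimal numpy sketch; the one-relation module and the particular matrix $Q$ are hypothetical examples of mine, just for illustration:

```python
import numpy as np

# A one-relation module: Z^2 with generators x, y and the single
# relation 4x + 6y = 0 (rows = relations, columns = generators).
A = np.array([[4, 6]])

# Q packages the change of generators {x, y} -> {2x + 3y, -x - y}.
# det(Q) = 1, so Q is invertible over the integers.
Q = np.array([[-1, -3],
              [ 1,  2]])

print(A @ Q)  # [[2 0]]: in the new generators the relation reads
              # 2(2x + 3y) = 0, so the module is Z/2 (+) Z.
```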
Okay, so how to actually compute this mysterious Smith normal form? Well, let's use a simple example. Suppose we have the following module that we want to simplify:
- The generators are $x$ and $y$.
- The relations are $2x + 4y = 0$ and $6x + 8y = 0$, as in the check below.
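Before walking through the algorithm, here is a numpy check of where we should end up on this example. The particular $P$ and $Q$ are just one valid choice, found by hand:

```python
import numpy as np

A = np.array([[2, 4],     # relation 2x + 4y = 0
              [6, 8]])    # relation 6x + 8y = 0

P = np.array([[1,  0],    # row operations: combine the relations
              [3, -1]])
Q = np.array([[1, -2],    # column operations: combine the generators
              [0,  1]])

print(P @ A @ Q)          # [[2 0]
                          #  [0 4]] -> the module is Z/2 (+) Z/4
```

Note that $2 \mid 4$, so the divisibility condition holds here too.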
There is still a question, though. What do we do when the top left element doesn't divide the rest of its row or column? We need to make sure that $P$ and $Q$ have inverses with integer entries, so how do we do this? The answer lies in Bezout's identity. Suppose we're looking at a portion of our matrix that looks like $\begin{pmatrix} a & \cdots \\ c & \cdots \end{pmatrix}$. To make it simpler, let's pretend it just looks like $\begin{pmatrix} a \\ c \end{pmatrix}$. It might take a while, but you should be able to see that it doesn't matter what the dimensions of the stretched-out version are, as long as we multiply by square matrices on either side. So now say we want to eliminate $c$. How can we do that? Well, we write $g = ua + vc$, where $g$ is the greatest common divisor of $a$ and $c$. Then $u\frac{a}{g} + v\frac{c}{g} = 1$, so we know that $\begin{pmatrix} u & v \\ -c/g & a/g \end{pmatrix}$ is invertible because its determinant is $1$. Furthermore, when we multiply we get $\begin{pmatrix} u & v \\ -c/g & a/g \end{pmatrix}\begin{pmatrix} a \\ c \end{pmatrix} = \begin{pmatrix} g \\ 0 \end{pmatrix}$, and we have eliminated $c$, just as we wanted.

The problem is: this might have undone our elimination of some of the columns! Never fear: we can alternate between eliminating the first row and eliminating the first column, producing a sequence of top left elements in which each divides the previous one. Whenever the top left element actually changes, it becomes a proper divisor of its predecessor and hence strictly smaller in absolute value, so the process terminates eventually. (For general PIDs, we create an increasing sequence of ideals and then use the fact that our ring is a PID, hence Noetherian, so the sequence eventually becomes constant.) So once both the first row and first column are eliminated, we set aside the element that's now in the top left and proceed to diagonalize the rest.
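Here is that Bezout step as a small Python sketch; `bezout_block` is a name of my own, and the loop inside is the standard extended Euclidean algorithm:

```python
def bezout_block(a, c):
    """Return the 2x2 integer matrix [[u, v], [-c//g, a//g]] of determinant 1
    that sends the column (a, c) to (g, 0), where g = gcd(a, c)."""
    # Extended Euclid: maintain old_r == old_u*a + old_v*c at every step.
    old_r, r, old_u, u, old_v, v = a, c, 1, 0, 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
        old_v, v = v, old_v - q * v
    g = old_r
    return [[old_u, old_v], [-c // g, a // g]]

M = bezout_block(4, 6)                  # gcd(4, 6) == 2
print(M)                                # [[-1, 1], [-3, 2]]
print([M[0][0] * 4 + M[0][1] * 6,
       M[1][0] * 4 + M[1][1] * 6])      # [2, 0]: c has been eliminated
```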
Each of these row or column operations can be represented by an invertible matrix that can be absorbed into either P or Q (P for row operations, Q for column operations), giving us the construction we wanted before. That the divisibility condition can be satisfied with row and column operations is a simple exercise.
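Putting the pieces together, here is a rough numpy sketch of my own of the whole diagonalization loop, with every row operation absorbed into $P$ and every column operation into $Q$. Enforcing the divisibility chain on the diagonal at the end is omitted, matching the exercise above:

```python
import numpy as np

def extended_gcd(a, c):
    """Return (g, u, v) with u*a + v*c == g == gcd(a, c) and g >= 0."""
    if c == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, u, v = extended_gcd(c, a % c)
    return (g, v, u - (a // c) * v)

def diagonalize(A):
    """Return (P, D, Q) with D = P @ A @ Q diagonal and P, Q invertible
    over the integers.  Sorting the diagonal so that each entry divides
    the next is left out (the 'simple exercise' above)."""
    D = A.copy()
    m, n = D.shape
    P = np.eye(m, dtype=D.dtype)
    Q = np.eye(n, dtype=D.dtype)
    for t in range(min(m, n)):
        # Alternate between clearing column t and row t; the top left
        # entry shrinks whenever it changes, so this terminates.
        while any(D[i, t] != 0 for i in range(t + 1, m)) or \
              any(D[t, j] != 0 for j in range(t + 1, n)):
            for i in range(t + 1, m):        # row ops, absorbed into P
                if D[i, t] != 0:
                    g, u, v = extended_gcd(D[t, t], D[i, t])
                    ag, cg = D[t, t] // g, D[i, t] // g
                    D[[t, i]] = [u * D[t] + v * D[i], -cg * D[t] + ag * D[i]]
                    P[[t, i]] = [u * P[t] + v * P[i], -cg * P[t] + ag * P[i]]
            for j in range(t + 1, n):        # column ops, absorbed into Q
                if D[t, j] != 0:
                    g, u, v = extended_gcd(D[t, t], D[t, j])
                    ag, cg = D[t, t] // g, D[t, j] // g
                    D[:, [t, j]] = np.column_stack(
                        (u * D[:, t] + v * D[:, j],
                         -cg * D[:, t] + ag * D[:, j]))
                    Q[:, [t, j]] = np.column_stack(
                        (u * Q[:, t] + v * Q[:, j],
                         -cg * Q[:, t] + ag * Q[:, j]))
    return P, D, Q

A = np.array([[2, 4],
              [6, 8]])
P, D, Q = diagonalize(A)
print(D)                       # [[2 0] [0 -4]]: diagonal up to sign
print((P @ A @ Q == D).all())  # True
```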
So how does this relate to homology? Well, it's relatively simple. We can represent our boundary homomorphisms easily with matrices. The kernel of a homomorphism is just the null space of the corresponding matrix, and the image is its column space. So to compute a homology group, we reduce the boundary matrices into Smith normal form and then read off the module. The one thing we need to watch is that the quotient is taken inside the kernel of the next boundary map, not inside the whole chain group: our module is generated by a basis of the null space, not by arbitrary columns, so its rank is bounded by the dimension of that kernel rather than by the full size of the matrix.
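As a sketch of how the reading-off works (assuming sympy's `smith_normal_form` from `sympy.matrices.normalforms`; the Klein bottle example is mine, not from the text above):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def homology(d_k, d_k_plus_1):
    """H_k = ker(d_k) / im(d_{k+1}), read off from Smith normal form.
    d_k       : matrix of the boundary map C_k -> C_{k-1}
    d_k_plus_1: matrix of the boundary map C_{k+1} -> C_k
    Returns (Betti number, list of torsion coefficients)."""
    dim_Ck = d_k.shape[1]              # number of k-cells
    kernel_rank = dim_Ck - d_k.rank()  # rank of ker(d_k)
    snf = smith_normal_form(d_k_plus_1, domain=ZZ)
    diag = [abs(snf[i, i]) for i in range(min(snf.shape)) if snf[i, i] != 0]
    betti = kernel_rank - len(diag)        # free part of H_k
    torsion = [d for d in diag if d != 1]  # Z/d summands
    return betti, torsion

# Klein bottle as a CW complex: one 0-cell, two 1-cells a, b, and one
# 2-cell attached along a b a b^{-1} (so its boundary is 2a).
d1 = Matrix([[0, 0]])     # both loops start and end at the single vertex
d2 = Matrix([[2],
             [0]])
print(homology(d1, d2))   # (1, [2]), i.e. H_1 = Z (+) Z/2
```

(Reading the torsion straight off the Smith normal form of $\partial_{k+1}$ over the whole chain group is fine, because the image lies inside the kernel and the quotient by the kernel is free.)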
As I said, making progress here seems difficult. The methods for computing Smith normal form are quite sophisticated, and it would be very difficult to surpass them. However, the large gap between the cost of those algorithms and any known lower bound gives me hope for finding at least something for orientable manifolds in dimensions higher than 3 that runs faster than just computing the boundary matrices and their Smith normal forms.