Tuesday, April 27, 2010

I haven't posted an update on my senior project in a long time. I'll post a longer explanation and analysis when I'm fully done, but anyone worrying can rest assured that my program is complete (and, I believe, finally bug free). Here are a few images from it:
Sunday, October 11, 2009
Smith Normal Form...Finally
Okay, so I'm writing this post about Smith Normal Form just as I've decided that my first point of attack is going to be avoiding it. This is because the fastest algorithm I found to compute Smith Normal Form in general takes polynomial time of fairly high degree, whereas an algorithm that uses geometric properties of spaces can compute homology groups in roughly linear time. This is a huge gap, although the big issue is going to be how to handle torsion, since the paper with the fast geometric algorithm assumes the space is embedded into $\mathbb{R}^3$, which implies that there is no torsion (so the homology groups are all isomorphic to $\mathbb{Z}^k$ for some $k$). Well, let's look at why Smith Normal Form is hard to compute. To do that, I should explain what it is.
Consider an $m \times n$ matrix $A$ whose entries are elements of a PID (in our case, $\mathbb{Z}$). Then we can find invertible square matrices $P$ and $Q$ with entries in the PID such that $PAQ$ is diagonal and $d_i \mid d_j$ if $i < j$. This matrix need not be square; "diagonal" just means that the only entries that can be nonzero are the $d_i = (PAQ)_{ii}$ for varying $i$. Here is how I think you should think about it:
- $A$ is a matrix that represents a quotient module of $\mathbb{Z}^n$. Each column represents one generator, and each row represents one relation.
- $P$ represents combining of the relations. This is like how you can take $x + 2y = 0$ and $x + y = 0$ to get $y = 0$ and $x + y = 0$ by subtracting the second from the first. With $P$ alone you can get into a row reduced form, but not necessarily diagonal.
- $Q$ represents combining the generators. So maybe instead of using the generator set $\{x, y\}$ we want to use $\{x + y, y\}$. $Q$ lets us do that. However, this may cause the elements on the diagonal to get larger. This is probably not obvious to you, but when I explain the algorithm for computing the normal form it should become more clear.

The divisibility condition is also one of the weirder things, but it is essentially just removing all the ambiguities caused by the Chinese Remainder Theorem: $\mathbb{Z}/2 \times \mathbb{Z}/3 \cong \mathbb{Z}/6$, so the same group has two different-looking descriptions. Smith normal form uses the divisibility condition to give the latter, although in the form $\mathbb{Z}/1 \times \mathbb{Z}/6$.
Okay, so how to actually compute this mysterious Smith normal form? Well, let's use a simple example. Suppose we have the following module that we want to simplify: $\mathbb{Z}^3$ with generators $x$, $y$, $z$, modulo the relations $x + 2y + z = 0$ and $2x + 6y + 2z = 0$. This corresponds to the matrix
$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 6 & 2 \end{pmatrix}.$$
We then do row and column operations to eliminate entries: subtracting two times the first row from the second row gives
$$\begin{pmatrix} 1 & 2 & 1 \\ 0 & 2 & 0 \end{pmatrix}.$$
Then subtracting twice the first column from the second column and one times the first column from the third column, we get
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \end{pmatrix},$$
which is our Smith normal form, and we now know that our module can be better written as $\mathbb{Z}/1 \oplus \mathbb{Z}/2 \oplus \mathbb{Z} \cong \mathbb{Z}/2 \oplus \mathbb{Z}$. The generators corresponding to each of these three components can be obtained from the column operations that we did. In this case, the generating set is $\{x + 2y + z,\ y,\ z\}$, and our relations become $x + 2y + z = 0$ and $2y = 0$.
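To make the procedure concrete, here is a rough Python sketch of this elimination (a toy implementation of my own, with made-up names: it repeatedly clears the first row and column of each submatrix with row and column operations, and it doesn't track $P$ and $Q$, enforce the divisibility condition, or normalize signs):

```python
def snf_diagonal(A):
    """Diagonalize an integer matrix using only invertible row and column
    operations (the kind that get absorbed into P and Q).  Returns the
    diagonal of the result.  Toy version: no P/Q tracking, no divisibility
    clean-up, and entries on the diagonal may come out negative."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0]) if A else 0
    for t in range(min(m, n)):
        # Move some nonzero entry of the remaining submatrix to (t, t).
        pos = next(((i, j) for i in range(t, m) for j in range(t, n) if A[i][j]), None)
        if pos is None:
            break  # everything left is zero
        i, j = pos
        A[t], A[i] = A[i], A[t]
        for row in A:
            row[t], row[j] = row[j], row[t]
        while True:
            # Clear column t below the pivot with Euclidean row operations.
            for i in range(t + 1, m):
                while A[i][t]:
                    q = A[i][t] // A[t][t]
                    A[i] = [a - q * b for a, b in zip(A[i], A[t])]
                    if A[i][t]:  # nonzero remainder: swap rows, as in Euclid
                        A[t], A[i] = A[i], A[t]
            # Clear row t to the right of the pivot with column operations.
            for j in range(t + 1, n):
                while A[t][j]:
                    q = A[t][j] // A[t][t]
                    for row in A:
                        row[j] -= q * row[t]
                    if A[t][j]:
                        for row in A:
                            row[t], row[j] = row[j], row[t]
            # A column swap can dirty column t again, so re-check both.
            if all(A[i][t] == 0 for i in range(t + 1, m)) and \
               all(A[t][j] == 0 for j in range(t + 1, n)):
                break
    return [A[k][k] for k in range(min(m, n))]
```

For example, `snf_diagonal([[1, 2, 1], [2, 6, 2]])` gives `[1, 2]`. Each pass either terminates or strictly shrinks the pivot, which is the same termination argument as in the post.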
There is still a question, though. What do we do when the top left element doesn't divide the row or the column elements? We need to make sure that $P$ and $Q$ have inverses with integer entries, so how do we do this? The answer lies in Bezout's identity. Suppose we're looking at a portion of our matrix that looks like
$$\begin{pmatrix} a & * & \cdots \\ c & * & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}.$$
To make it simpler, let's pretend it just looks like $\begin{pmatrix} a \\ c \end{pmatrix}$. It might take a while, but you should be able to see that it doesn't matter what the dimensions of the stretched version are, as long as we multiply by square matrices on either side. So now say we want to eliminate $c$. How can we do that? Well, what we do is we write $g = xa + yc$, where $g$ is the greatest common divisor of $a$ and $c$. Then $x \frac{a}{g} + y \frac{c}{g} = 1$, so we know that
$$\begin{pmatrix} x & y \\ -c/g & a/g \end{pmatrix}$$
is invertible because the determinant is 1. Furthermore, when we multiply we get
$$\begin{pmatrix} x & y \\ -c/g & a/g \end{pmatrix} \begin{pmatrix} a \\ c \end{pmatrix} = \begin{pmatrix} g \\ 0 \end{pmatrix},$$
and we have eliminated $c$, just as we wanted. The problem is: this might have undone our elimination of some of the columns! Well, never fear! We can alternate between eliminating the rows and eliminating the columns, leading to a sequence of top left elements where each divides the previous. Therefore at each step the top left element gets smaller in absolute value, meaning the process terminates eventually. (For general PIDs, we create an increasing sequence of ideals $(a_1) \subseteq (a_2) \subseteq \cdots$ and then use the fact that our ring is a PID to show that it is Noetherian, and therefore the sequence becomes constant eventually.) So once both the first row and first column are eliminated, we ignore the element that's now in the top left and proceed to diagonalize the rest.
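This Bezout step is easy to code up. Here is a short Python sketch (helper names are my own invention; it assumes $a$ and $c$ are not both zero, so the divisions by $g$ make sense):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and x*a + y*b = g."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def bezout_elimination_matrix(a, c):
    """Return the 2x2 integer matrix with determinant 1 that sends the
    column vector (a, c) to (g, 0), where g = gcd(a, c).
    Assumes a and c are not both zero."""
    g, x, y = extended_gcd(a, c)
    return [[x, y], [-c // g, a // g]]
```

For instance, `bezout_elimination_matrix(4, 6)` has determinant 1 and sends $(4, 6)$ to $(2, 0)$, eliminating the lower entry exactly as in the derivation above.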
Each of these row or column operations can be represented by an invertible matrix that can be absorbed into either P or Q (P for row operations, Q for column operations), giving us the construction we wanted before. That the divisibility condition can be satisfied with row and column operations is a simple exercise.
So how does this relate to homology? Well, it's relatively simple. We can represent our boundary homomorphisms easily with matrices: the kernel of a homomorphism is just the null space of the corresponding matrix, and its image is the column space. So to compute the homology group, we just reduce the matrix into Smith normal form and then read off the module. The one thing that we need to watch is that we don't use the components that correspond to rows that don't exist, because our module is generated by elements of the null space, not just any columns. That means that the rank of our module can't be more than $m$, as it otherwise could be.
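As a small illustration of the read-off step, here is a Python sketch (a helper of my own, with made-up names): given the rank of the kernel of $\partial_n$ and the diagonal of the Smith normal form of the matrix of $\partial_{n+1}$, it returns the free rank and torsion coefficients of $H_n = \ker \partial_n / \operatorname{im} \partial_{n+1}$.

```python
def homology_from_snf(kernel_rank, diag):
    """Read off H_n = ker(d_n) / im(d_(n+1)), given the rank of ker(d_n)
    and the diagonal entries of the Smith normal form of d_(n+1).
    Returns (free_rank, torsion_coefficients)."""
    nonzero = [abs(d) for d in diag if d != 0]
    free_rank = kernel_rank - len(nonzero)      # generators with no relation
    torsion = [d for d in nonzero if d > 1]     # d = 1 kills a generator outright
    return free_rank, torsion
```

So a kernel of rank 4 quotiented by relations with SNF diagonal $(1, 1, 1)$ gives free rank 1 and no torsion, i.e. $\mathbb{Z}$.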
As I said, making progress here seems difficult. The methods for computing Smith normal form are pretty complex, and it would be very difficult to surpass them. However, the large gap between the general Smith normal form bound and the geometric one gives me hope for finding at least something for orientable manifolds in dimensions higher than 3 that runs faster than just computing the boundary matrices and their Smith normal forms.
Thursday, September 24, 2009
Homology
It's been a while since I gave an update on my research project, and since I got LaTeX working on blogger, it's time to write a new post.
So to explain what Smith Normal Form is, I should explain what it does, and to do that I need to explain the problems that I am looking at. I'm going to assume that you are at least somewhat familiar with the concepts of a group and a quotient group. If not, read the Wikipedia articles on them.
So the first important idea is a homology group. Suppose that we have an infinite sequence of abelian groups $C_0, C_1, C_2, \ldots$ and homomorphisms $\partial_n : C_n \to C_{n-1}$ that satisfy the condition $\partial_n \circ \partial_{n+1} = 0$ (that is, applying two of these homomorphisms in a row always sends everything to the identity). Then we see immediately that $\operatorname{im} \partial_{n+1} \subseteq \ker \partial_n$. Furthermore, since these are abelian groups, we have $\operatorname{im} \partial_{n+1} \trianglelefteq \ker \partial_n$, so the quotient group $\ker \partial_n / \operatorname{im} \partial_{n+1}$ is defined. This infinite sequence of abelian groups is called a chain complex and these quotient groups are called the homology groups. The $n$th homology group is defined as $H_n = \ker \partial_n / \operatorname{im} \partial_{n+1}$.
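To get a feel for the composition condition, here is a quick Python check on the smallest interesting example, a single filled triangle (this jumps ahead a bit to the simplicial boundary maps described next; the matrices are my own encoding of that example):

```python
# Chain complex of one filled triangle with vertices v0, v1, v2.
# d1 sends each edge (v0v1, v0v2, v1v2) to (end point) - (start point);
# rows are the vertices v0, v1, v2.
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]
# d2 sends the triangle to its three boundary edges, with signs chosen
# so that the vertex contributions cancel.
d2 = [[ 1],
      [-1],
      [ 1]]
# The composition d1 . d2 should be the zero map.
compose = [[sum(d1[i][k] * d2[k][j] for k in range(3)) for j in range(1)]
           for i in range(3)]
print(compose)  # [[0], [0], [0]]
```

Flipping any one sign in `d2` breaks the cancellation, which is exactly why the boundary maps have to be oriented consistently.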
Okay, that's probably pretty opaque if you don't already know what I'm talking about, so here is a simple example (which will need an example of its own to illustrate, but that will come in time) called simplicial homology. We will build up a space as follows. To begin, we start with some set of isolated points, which we will call the 0-skeleton of our space. Next, we'll draw some lines (these are topological lines, so we're assuming that none of them touch except at points in the 0-skeleton, and they don't have to be straight) between points of the 0-skeleton. The collection of points and lines that results will be called the 1-skeleton. Now, we take some triangles (that are "filled in") and glue them on so that their boundaries are part of the 1-skeleton. The 1-skeleton, along with these triangles, forms the 2-skeleton. We create the 3-skeleton, the 4-skeleton, et cetera by gluing in tetrahedra, 4-simplices, 5-simplices, and so on (this gets harder to visualize past the 2-skeleton, so I'll stop there).
Now let's define a chain complex on this simplicial complex as follows. Our abelian groups $C_n$ are the free abelian groups generated by the $n$-simplices (so they'll all be isomorphic to $\mathbb{Z}^k$ for some $k$). The homomorphisms are the boundary maps, with orientation done so that the composition requirement is satisfied. For example, the boundary of a line will be (as one possibility) the formal difference of the start point and the end point (the reason for using addition as the group operation here will become apparent later). Then the boundary of a triangle will be directed so that the differences all cancel out when the boundary is taken again.
So as an example consider the following simplicial complex:
This complex is made up of six points, nine lines, and three triangles. So the chain groups are $C_0 \cong \mathbb{Z}^6$, $C_1 \cong \mathbb{Z}^9$, $C_2 \cong \mathbb{Z}^3$, and all the rest are trivial. We can then compute the homology groups. The first homology group is $H_1 = \ker \partial_1 / \operatorname{im} \partial_2$. I claim that this is isomorphic to $\mathbb{Z}$. To see this, consider the top triangle. Suppose that our element of $\ker \partial_1$ has a component with the top right line. Then the boundary includes a contribution of the top point. We must then have an equal component of the top left line to cancel out this contribution, but we have the relation that the sum of the boundary of the top triangle is trivial (because of the quotient). That means that we can effectively replace all the components of the top left and top right lines with the bottom line of the top triangle. Similarly, we can do this for the other two triangles and reduce our element to just a combination of the middle three lines. Then for these to have trivial boundary, the three components must all be equal, but they can be anything. This means that our homology group is $\mathbb{Z}$, since it is simply determined by how many times it contains this loop around the center. The second homology group is much simpler, since the image of $\partial_3$ is trivial because $C_3$ is trivial, so the second homology group is the kernel of $\partial_2$. The kernel of $\partial_2$ is also trivial, since the boundaries of the three triangles are all independent and none of them are empty. This means that the second homology group is the trivial group.
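If you'd rather check this mechanically, here is a Python sketch that computes the Betti numbers (the free ranks) of this complex from its boundary matrices. The numbering of the six points, nine lines, and three triangles is my own choice of labels; any consistent labeling works:

```python
from fractions import Fraction

def rank(M):
    """Rank of an integer matrix, computed exactly over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    m, n = len(M), len(M[0]) if M else 0
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(m):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# My labeling: 0 = top point, 1/2 = middle row, 3/4/5 = bottom row.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (1, 4), (3, 4), (2, 4), (2, 5), (4, 5)]
triangles = [(0, 1, 2), (1, 3, 4), (2, 4, 5)]  # the middle triangle is unfilled

# Boundary of an edge (u, v) is v - u; rows of d1 are vertices.
d1 = [[0] * len(edges) for _ in range(6)]
for e, (u, v) in enumerate(edges):
    d1[u][e] -= 1
    d1[v][e] += 1

# Boundary of a triangle (a, b, c) is (b,c) - (a,c) + (a,b); rows are edges.
d2 = [[0] * len(triangles) for _ in edges]
for t, (a, b, c) in enumerate(triangles):
    for face, sign in (((b, c), 1), ((a, c), -1), ((a, b), 1)):
        d2[edges.index(face)][t] += sign

b1 = (len(edges) - rank(d1)) - rank(d2)  # dim ker(d1) - dim im(d2)
b2 = len(triangles) - rank(d2)           # H2 = ker(d2) here, since C3 = 0
print(b1, b2)  # 1 0
```

This agrees with the hand computation: one independent loop around the center, and a trivial second homology group.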
Wooo, if you followed that you probably understand as much as I do about homology (although I was pretty lazy about keeping track of orientation, which is bad)! In the next post on this series I'll explain what this has to do with matrices and Smith normal form.