## Hypermatrix Diagonalisation

October 10, 2008

If you’ve done much matrix algebra you will know that many linear problems can be quickly solved using the method of diagonalisation. The symmetry of the problem is used to transform the matrix until only the diagonal elements are non-zero. Matrix multiplication then just involves multiplying the diagonal elements (known as the eigenvalues) and the determinant is just their product.

It would be nice if we could do a similar trick with multi-dimensional hypermatrices. A hypermatrix does not have a specific set of components that form a diagonal, but we could choose one of its diagonals and try to use linear transforms to eliminate the other elements. If that fails we could look for other ways to reduce most of the components to zero so that computations simplify. In general the diagonalisation trick does not work that well for hypermatrices, and it is easy to see why. A hypermatrix of rank n with Nⁿ components can be transformed using n linear transforms which each have N² − 1 degrees of freedom. Just by counting we can see that fewer than nN² components can be eliminated by a general method. If n > 2 and N or n is large then only a minority of the components can be removed.
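As a sanity check on this counting argument, here is a short Python sketch (the function names are my own) comparing the Nⁿ components of a rank-n hypermatrix with the parameters supplied by the n transforms:

```python
# Counting argument: an N x N x ... x N (rank-n) hypermatrix has N**n
# components, while the n special linear transforms acting on it supply
# only n * (N**2 - 1) parameters between them.

def components(N, n):
    return N ** n

def transform_parameters(N, n):
    return n * (N ** 2 - 1)

for N, n in [(2, 2), (2, 3), (3, 3), (10, 3), (2, 10)]:
    c = components(N, n)
    p = transform_parameters(N, n)
    print(f"N={N} n={n}: {c} components, {p} parameters, ratio {p / c:.3f}")
```

For n = 2 (ordinary matrices) the parameters comfortably outnumber the components, but as N or n grows the ratio collapses, which is the point being made above.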

Nevertheless, for specific small formats the elimination method is still useful, and often we are interested in the properties of small hypermatrices. An example already came up in the construction of Cayley's hyperdeterminant. For a 2×2×2 hypermatrix A we demonstrated that if the hyperdeterminant is not zero then the hypermatrix (over the complex numbers) can be written in the form

A = f₁ x₁⊗y₁⊗z₁ + f₂ x₂⊗y₂⊗z₂

We can normalise the vectors so that x₁∧x₂ = y₁∧y₂ = z₁∧z₂ = 1.

Although we did not mention it at the time, we can use special linear transforms to rotate to a system where these vectors form a basis for each of the three vector spaces. The hypermatrix A then has only two non-zero elements (a₁₁₁ = f₁ and a₂₂₂ = f₂) and the other six are zero. For this specific case the hypermatrix can be diagonalised, and the hyperdeterminant in this form is just det(A) = f₁²f₂².
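As a concrete check, this Python sketch evaluates Cayley's 2×2×2 hyperdeterminant (with indices written 0 and 1, so a[0][0][0] plays the role of a₁₁₁) on a diagonalised hypermatrix and confirms that it comes out as f₁²f₂²; the function name is my own:

```python
# Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i][j][k], i,j,k in {0,1}.
def cayley_hyperdet(a):
    # Products of the four pairs of "opposite corner" entries.
    p = a[0][0][0] * a[1][1][1]
    q = a[0][0][1] * a[1][1][0]
    r = a[0][1][0] * a[1][0][1]
    s = a[0][1][1] * a[1][0][0]
    return (p * p + q * q + r * r + s * s
            - 2 * (p * q + p * r + p * s + q * r + q * s + r * s)
            + 4 * (a[0][0][0] * a[0][1][1] * a[1][0][1] * a[1][1][0]
                   + a[0][0][1] * a[0][1][0] * a[1][0][0] * a[1][1][1]))

# Diagonalised hypermatrix: only a[0][0][0] = f1 and a[1][1][1] = f2 survive.
f1, f2 = 2, 3
a = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
a[0][0][0], a[1][1][1] = f1, f2
print(cayley_hyperdet(a))  # f1**2 * f2**2 = 36
```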

What of the case where the hyperdeterminant is zero? Then it is not always possible to diagonalise the 2×2×2 hypermatrix in the same way. However, we can eliminate a different set of elements. To see this, first recall that when the hyperdeterminant det(A) is zero there are three vectors x₁, y₁, z₁ such that contracting any two of them with A gives a zero vector. Choose a second set of vectors x₂, y₂, z₂ that are linearly independent of these and normalise so that x₁∧x₂ = y₁∧y₂ = z₁∧z₂ = 1. Then we can use these as a basis in which x₁ = (1,0), x₂ = (0,1) and so on. For the contractions to give zero vectors it must then follow that

a₁₁₁ = a₁₁₂ = a₁₂₁ = a₂₁₁ = 0

So half of the eight components have been eliminated. A similar argument works for any hypermatrix whose hyperdeterminant is zero, eliminating the first element and all the others in the same row in each of the directions.
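The converse direction is easy to verify directly: if a₁₁₁, a₁₁₂, a₁₂₁ and a₂₁₁ all vanish, every term of Cayley's hyperdeterminant contains at least one of them, so the hyperdeterminant is zero. Here is a Python sketch of that check (indices written 0 and 1, so the four zeroed entries are a[0][0][0], a[0][0][1], a[0][1][0] and a[1][0][0]):

```python
from itertools import product

# Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i][j][k], i,j,k in {0,1}.
def cayley_hyperdet(a):
    p = a[0][0][0] * a[1][1][1]
    q = a[0][0][1] * a[1][1][0]
    r = a[0][1][0] * a[1][0][1]
    s = a[0][1][1] * a[1][0][0]
    return (p * p + q * q + r * r + s * s
            - 2 * (p * q + p * r + p * s + q * r + q * s + r * s)
            + 4 * (a[0][0][0] * a[0][1][1] * a[1][0][1] * a[1][1][0]
                   + a[0][0][1] * a[0][1][0] * a[1][0][0] * a[1][1][1]))

# Zero the four entries a111, a112, a121, a211 and let the remaining four
# entries range over some sample values: the hyperdeterminant always vanishes.
for a011, a101, a110, a111 in product([-2, 1, 3], repeat=4):
    a = [[[0, 0], [0, a011]], [[0, a101], [a110, a111]]]
    assert cayley_hyperdet(a) == 0
print("hyperdeterminant vanishes for all sampled entries")
```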

Another way to get rid of components is to use an elimination process similar to the familiar one for matrices. Consider a transformation by a matrix whose diagonal elements are all one and whose off-diagonal elements are all zero except for a single entry with value u. Such a matrix always has determinant one, so the hyperdeterminant (and the other invariants) are unchanged under the transformation. You can see that this transformation on a hypermatrix of rank n is actually equivalent to taking an (n−1)-dimensional slice of the hypermatrix, multiplying it by u and adding it to a different slice with the same orientation. This generalises the rule that the determinant of a matrix is unchanged when you add a multiple of a row (or column) to another row (or column), and it is this rule that can be used to eliminate elements.
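To illustrate, here is a Python sketch of one such transformation on a 2×2×2 hypermatrix: add u times one slice (taken along the first direction) to the other slice, and check that Cayley's hyperdeterminant is unchanged. The entries and the value of u are arbitrary sample values.

```python
# Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i][j][k], i,j,k in {0,1}.
def cayley_hyperdet(a):
    p = a[0][0][0] * a[1][1][1]
    q = a[0][0][1] * a[1][1][0]
    r = a[0][1][0] * a[1][0][1]
    s = a[0][1][1] * a[1][0][0]
    return (p * p + q * q + r * r + s * s
            - 2 * (p * q + p * r + p * s + q * r + q * s + r * s)
            + 4 * (a[0][0][0] * a[0][1][1] * a[1][0][1] * a[1][1][0]
                   + a[0][0][1] * a[0][1][0] * a[1][0][0] * a[1][1][1]))

# A sample 2x2x2 hypermatrix with non-zero hyperdeterminant.
a = [[[1, 2], [3, 4]], [[5, 6], [7, 9]]]

# Slice operation: replace slice a[0] by a[0] + u * a[1].  This is the action
# of the matrix [[1, u], [0, 1]] (determinant 1) on the first index.
u = 2
b = [[[a[0][j][k] + u * a[1][j][k] for k in range(2)] for j in range(2)],
     [[a[1][j][k] for k in range(2)] for j in range(2)]]

print(cayley_hyperdet(a), cayley_hyperdet(b))  # both 33: the invariant is unchanged
```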

We will be using some of these methods later.