Before I go back to some more number theory blogging, here is another post about one of the curious properties of Cayley’s Hyperdeterminant.

Sometimes it is useful to look at hypermatrices that have extra symmetries. For example, we might be interested in a 2×2×2 hypermatrix A whose components a_{ijk} form a totally symmetric tensor, i.e.

a_{ijk} = a_{ikj} = a_{jik}

This means that the hypermatrix has just four independent components

a_{111} = a

a_{112} = a_{121} = a_{211} = b/3

a_{221} = a_{212} = a_{122} = c/3

a_{222} = d

This hypermatrix often comes up because it represents a symmetric trilinear form or, more simply, a binary cubic polynomial

A(x, y) = a x^{3} + b x^{2}y + c xy^{2} + d y^{3}

The invariance group of the hypermatrix that preserves the symmetry structure is reduced from SL(2) X SL(2) X SL(2) to the diagonal SL(2), and the hyperdeterminant reduces to the discriminant of the polynomial

-27 det(A) = b^{2}c^{2} – 4ac^{3} – 4b^{3}d – 27 a^{2}d^{2} + 18 abcd
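As a sanity check, this identity can be verified numerically. The sketch below (helper names are my own) uses the standard polynomial expansion of the 2×2×2 hyperdeterminant, with indices running over 0 and 1, and compares −27 det(A) with the discriminant for random cubics:

```python
from fractions import Fraction
from itertools import product
import random

def hyperdet(t):
    """Cayley's hyperdeterminant of a 2x2x2 hypermatrix t[(i,j,k)], i,j,k in {0,1}."""
    a = lambda i, j, k: t[(i, j, k)]
    return (a(0,0,0)**2 * a(1,1,1)**2 + a(0,0,1)**2 * a(1,1,0)**2
            + a(0,1,0)**2 * a(1,0,1)**2 + a(0,1,1)**2 * a(1,0,0)**2
            - 2 * (a(0,0,0)*a(0,0,1)*a(1,1,0)*a(1,1,1)
                   + a(0,0,0)*a(0,1,0)*a(1,0,1)*a(1,1,1)
                   + a(0,0,0)*a(0,1,1)*a(1,0,0)*a(1,1,1)
                   + a(0,0,1)*a(0,1,0)*a(1,0,1)*a(1,1,0)
                   + a(0,0,1)*a(0,1,1)*a(1,1,0)*a(1,0,0)
                   + a(0,1,0)*a(0,1,1)*a(1,0,1)*a(1,0,0))
            + 4 * (a(0,0,0)*a(0,1,1)*a(1,0,1)*a(1,1,0)
                   + a(0,0,1)*a(0,1,0)*a(1,0,0)*a(1,1,1)))

def symmetric_hypermatrix(a, b, c, d):
    """Totally symmetric 2x2x2 hypermatrix of the cubic a x^3 + b x^2 y + c xy^2 + d y^3."""
    coeff = [Fraction(a), Fraction(b, 3), Fraction(c, 3), Fraction(d)]
    return {(i, j, k): coeff[i + j + k] for i, j, k in product((0, 1), repeat=3)}

for _ in range(100):
    a, b, c, d = (random.randint(-9, 9) for _ in range(4))
    disc = b*b*c*c - 4*a*c**3 - 4*b**3*d - 27*a*a*d*d + 18*a*b*c*d
    assert -27 * hyperdet(symmetric_hypermatrix(a, b, c, d)) == disc
```

Exact rational arithmetic (Fraction) is used so the b/3 and c/3 entries introduce no rounding.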

For higher-dimensional hypermatrices an analogous result holds, but the hyperdeterminant reduces to a power of the discriminant, e.g. for a symmetric 2x2x2x2 hypermatrix the hyperdeterminant is the fourth power of the discriminant times some factor.

There are other ways that hypermatrices can be constrained while still retaining some symmetry. Suppose we demand that the hypermatrix is invariant under some fixed set of linear transforms, e.g. for an m^{n} hypermatrix we select n different mxm matrices, apply them as a transformation on the hypermatrix, and then require that this gives back the same hypermatrix. For which sets of matrices is this possible in a non-trivial way? The question can be made more general if we allow indices to be transposed as well, so that the symmetric hypermatrices already considered are a special case. I won't attempt to give a complete answer, but I'll look at some special cases, without the transpositions.

A useful special case for matrices is when the transformation matrices are permutation matrices. An m x m matrix M can be transformed to give PMQ where P and Q are matrices which just permute the m dimensions. This generates a permutation of the m² elements of the matrix, so to impose invariance we require that the elements of the matrix lying in any one cycle of that permutation are all equal. Circulant matrices are obviously special cases of this.
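As an illustration, here is a small numpy sketch (with a hypothetical 4×4 example) checking that a circulant matrix is invariant when P is a cyclic shift and Q is its transpose:

```python
import numpy as np

n = 4
# cyclic shift permutation matrix: P maps e_i to e_{i+1 mod n}
P = np.roll(np.eye(n, dtype=int), 1, axis=0)

# circulant matrix built from one row of (arbitrary) values
c = np.array([3, 1, 4, 1])
C = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

# invariance: permuting rows by P and columns by P^T returns the same matrix,
# because every entry moves around a cycle on which all entries are equal
assert np.array_equal(P @ C @ P.T, C)
```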

When some of the cycles have even length we can include a sign factor in the transformation matrix. For example, transform a 2×2 matrix on both sides using J =

(0 -1)
(1 0)

An invariant matrix will have the form

(a -b)
(b a)

This is the familiar representation of the complex numbers.

The residual symmetry is reduced from SL(2) x SL(2) to transformation matrices that commute with J, i.e. SO(2) x SO(2).
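A quick numpy check of both facts (the variable names are my own): matrices of this form multiply like complex numbers, they are invariant under the J-transformation, and rotation matrices commute with J:

```python
import numpy as np

J = np.array([[0, -1], [1, 0]])

def as_matrix(z):
    """Represent the complex number z = x + iy as the 2x2 matrix (x -y; y x)."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 3 + 2j, 1 - 5j

# matrix multiplication mirrors complex multiplication
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))

# such matrices are exactly those invariant under transforming by J on both sides
M = as_matrix(z)
assert np.allclose(J @ M @ np.linalg.inv(J), M)

# rotation matrices (SO(2)) commute with J, so they preserve the constraint
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R @ J, J @ R)
```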

We could try to generalise this to the 2^{n} hypermatrix, but it only works when n is even: since J² = -I, applying the transformation twice multiplies the hypermatrix by (-1)^{n}, so for odd numbers of dimensions the constraint makes the hypermatrix all zero.
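The sign-flip that kills the odd case can be checked directly. The sketch below (helper names are my own) applies J to all three slots of a random 2x2x2 hypermatrix twice and confirms the result is the negative of the original, so a J-invariant hypermatrix would have to equal its own negative:

```python
import numpy as np

J = np.array([[0, -1], [1, 0]])

def transform(A, M):
    """Apply the 2x2 matrix M to every slot of a 2x2x2 hypermatrix A."""
    return np.einsum('ia,jb,kc,abc->ijk', M, M, M, A)

A = np.random.randn(2, 2, 2)

# J^2 = -I, so applying the transformation twice in three slots gives (-1)^3 A = -A
assert np.allclose(transform(transform(A, J), J), -A)
```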

However, if we use instead the matrix K

(0 1)
(1 0)

Then the result is non-trivial for 3 (or any number of) dimensions and the residual symmetry is SO(1,1) x SO(1,1) x SO(1,1).

This means that opposing elements of the hypermatrix A must be equal

a_{111} = a_{222} = a/2

a_{112} = a_{221} = b/2

a_{121} = a_{212} = c/2

a_{122} = a_{211} = -d/2
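Here is a sketch verifying the invariance (with hypothetical values for a, b, c, d, and my own helper names): the hypermatrix with equal opposing elements is unchanged by applying K in every slot, and hyperbolic rotations, which commute with K, preserve the constraint:

```python
import numpy as np

K = np.array([[0, 1], [1, 0]])

def transform(A, M):
    """Apply the 2x2 matrix M to every slot of a 2x2x2 hypermatrix A."""
    return np.einsum('ia,jb,kc,abc->ijk', M, M, M, A)

a, b, c, d = 3.0, 1.0, 4.0, 2.0
A = np.zeros((2, 2, 2))
A[0,0,0] = A[1,1,1] = a / 2
A[0,0,1] = A[1,1,0] = b / 2
A[0,1,0] = A[1,0,1] = c / 2
A[0,1,1] = A[1,0,0] = -d / 2

# opposing elements equal => invariant under applying K in every slot
assert np.allclose(transform(A, K), A)

# a hyperbolic rotation (SO(1,1)) commutes with K, so it preserves the constraint
t = 0.6
H = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])
B = transform(A, H)
assert np.allclose(B, B[::-1, ::-1, ::-1])  # still has equal opposing elements
```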

For this case something nice happens to the hyperdeterminant. It factorises!

det(A) = -(a + b + c – d)(a + b – c + d)(a – b + c + d)(-a + b + c + d)/16

Classical geometry students will instantly recognise the product on the right from Brahmagupta’s formula for the area of a quadrilateral inscribed in a circle with sides of length a, b, c and d, which generalises Heron’s formula for the area of a triangle. (With the sign convention fixed by the discriminant formula above, the hyperdeterminant comes out as minus the Brahmagupta product over 16.) We can write it as

Area = sqrt(-det(A))
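This can be checked numerically via Schläfli’s method for the 2x2x2 hyperdeterminant: slice the hypermatrix along one index and take the discriminant of det(x A₀ + y A₁) as a binary quadratic. A sketch (helper names are my own; note the overall sign depends on the convention chosen for the hyperdeterminant — with the one fixed by the discriminant formula earlier, the squared area is minus the hyperdeterminant):

```python
import numpy as np

def hyperdet(A):
    """2x2x2 hyperdeterminant via Schlafli's method: the discriminant
    q^2 - 4pr of det(x*A[0] + y*A[1]) = p x^2 + q xy + r y^2."""
    p = np.linalg.det(A[0])
    r = np.linalg.det(A[1])
    q = np.linalg.det(A[0] + A[1]) - p - r
    return q * q - 4 * p * r

def quad_hypermatrix(a, b, c, d):
    """Hypermatrix with equal opposing elements built from sides a, b, c, d."""
    A = np.zeros((2, 2, 2))
    A[0,0,0] = A[1,1,1] = a / 2
    A[0,0,1] = A[1,1,0] = b / 2
    A[0,1,0] = A[1,0,1] = c / 2
    A[0,1,1] = A[1,0,0] = -d / 2
    return A

a, b, c, d = 5.0, 5.0, 5.0, 5.0   # cyclic quadrilateral with equal sides: a square
brahmagupta = (a+b+c-d) * (a+b-c+d) * (a-b+c+d) * (-a+b+c+d) / 16
area_sq = -hyperdet(quad_hypermatrix(a, b, c, d))
assert np.isclose(area_sq, brahmagupta)   # a 5x5 square has area 25, so area^2 = 625
```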

This is quite pleasing because geometric interpretations of hyperdeterminants are quite hard to come by.

Notice that I had to introduce a minus sign on one pair of opposing numbers to get this to work. This was also the case when comparing the formula for regular Diophantine Quadruples with the hyperdeterminant, so in some sense it seems to be a natural thing to do.