Hypermatrix Inverses

June 6, 2009

The last post about multiplicative hyperdeterminants used a form of multiplication of hypermatrices in which the last direction of one hypermatrix is contracted over the first direction of another. We can write this product using ordinary product notation, so A multiplied by B is just AB. This generalises multiplication of two matrices and multiplication of a matrix with a vector. For two vectors it gives the scalar product.

A more general product can be defined if we contract over more than one of the hypermatrix directions. I will write A(p)B to mean the product of hypermatrices A and B in which the last p directions of A are contracted with the first p directions of B in reverse order. When a hypermatrix A of rank r is multiplied by a hypermatrix B of rank s, the product A(p)B is a hypermatrix of rank r+s-2p. It is only defined if the dimensions of the last p directions of A match the dimensions of the first p directions of B. If A and B are multi-linear operators on vector spaces, then the last vector spaces that A operates on must be the dual spaces of the first vector spaces that B operates on. When a vector space is transformed by a non-singular linear operator, its dual is transformed by the inverse of the operator.
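To make the convention concrete, here is a minimal sketch of the product A(p)B in Python with NumPy (hyper_product is my name for it, not standard notation):

    import numpy as np

    def hyper_product(A, B, p):
        """A(p)B: contract the last p directions of A with the first p
        directions of B in reverse order, so that the last direction of A
        pairs with the first direction of B, and so on."""
        a_axes = list(range(A.ndim - 1, A.ndim - p - 1, -1))  # last p axes of A, reversed
        b_axes = list(range(p))                               # first p axes of B
        return np.tensordot(A, B, axes=(a_axes, b_axes))      # rank r + s - 2p

With p = 1 this reproduces the matrix product, the matrix-vector product and the scalar product of two vectors.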

The problem we want to look at now is how to invert a hypermatrix so that we can use it to reverse a multiplication. In other words, if C = AB we want to solve for A in terms of C and B. We can do this if we have a hypermatrix D with the same format as B, which inverts B in a suitable fashion. Given any hypermatrix D with the same format as B (of rank r), we can form r matrices by contracting over all but one pair of matching directions. If all these matrices are a non-zero multiple of the identity matrix we call D an inverse of B. In particular we need B(r-1)D = xI and D(r-1)B = yI, but these are just two of the r ways of doing the contraction. Notice that we say “an” inverse rather than “the” inverse because such an inverse may not exist, and when it does exist it may not be unique (even up to a factor).
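The definition can be checked numerically. The sketch below pairs direction i of B with direction i of D, which is one natural reading of “matching directions” (for non-hypercubical formats the reverse-order convention of the product notation would pair the directions of D in the opposite order); D is an inverse of B precisely when every matrix returned here is a non-zero multiple of the identity:

    def inverse_checks(B, D):
        """Return the r matrices formed by contracting B with D over all
        but one pair of matching directions."""
        r = B.ndim
        mats = []
        for k in range(r):
            axes = [i for i in range(r) if i != k]
            mats.append(np.tensordot(B, D, axes=(axes, axes)))
        return mats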

Such an inverse can be used to invert the multiplication C = AB to give the solution A = (1/x) C(r-1)D. This does not require all the properties of the inverse, it just requires B(r-1)D = xI. But our more specific definition of inverse is interesting because there is a means to construct such inverses from hyperdeterminants and other invariants.
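In the matrix special case this is just the familiar solve-by-inverse, which gives a quick sanity check of the conventions (using hyper_product from above):

    rng = np.random.default_rng(0)
    B = rng.standard_normal((3, 3))
    A = rng.standard_normal((2, 3))
    C = hyper_product(A, B, 1)                     # C = AB
    D = np.linalg.inv(B)                           # B(1)D = I, so x = 1
    print(np.allclose(hyper_product(C, D, 1), A))  # True: A = (1/x) C(1)D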

The term inverse is justified because if D is an inverse of B then B is an inverse of D. This follows immediately from the symmetry of the definition. Furthermore, the usual inverse of a non-singular matrix is also an inverse in the hypermatrix sense, but so is any scalar multiple of it. Why not fix the normalisation? Take the trace of B(r-1)D = xI and D(r-1)B = yI and you get B(r)D = x d_1 = y d_r (where d_i is the dimension of the hypermatrix in the i-th direction), and similarly for the other directions of the hypermatrix. If the hypermatrix is hypercubical, so that all the directions have the same dimension, then we could fix x = y = 1, but that won't work for more general hypermatrix formats. Instead we would have to fix x = 1/d_1, y = 1/d_r, and so on, but that would be inconsistent with the usual inverse of a matrix. The best compromise is to not fix the normalisation at all.
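Spelled out, the trace computation is

     Tr(B(r-1)D) = B(r)D = x d_1,     Tr(D(r-1)B) = B(r)D = y d_r

so x d_1 = y d_r, and when d_1 and d_r differ the two scale factors cannot both be set to 1.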

Another point in favour of this definition is that the set of all inverses (with zero included) forms a vector space. The dimension of this vector space is an important attribute of the hypermatrix format and is not easy to calculate even for simple formats; a brute-force way to compute it for small formats is sketched below.
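Since the defining conditions are linear in D, the space of inverses is the null space of an explicit linear map, so its dimension can be found by a rank computation. A minimal sketch, again pairing matching directions and using NumPy as above (inverse_space_dim is my name for it):

    def inverse_space_dim(B, tol=1e-8):
        """Dimension of the vector space of inverses of B: D is an inverse
        when, for each direction k, the contraction of B with D over all
        other pairs of matching directions is a multiple of the identity,
        which is a linear condition on the entries of D."""
        r, n = B.ndim, B.size
        rows = []
        for k in range(r):
            axes = [i for i in range(r) if i != k]
            d = B.shape[k]
            # the k-th contraction as a linear map on the entries of D
            cols = [np.tensordot(B, e.reshape(B.shape), axes=(axes, axes)).ravel()
                    for e in np.eye(n)]
            M = np.stack(cols, axis=1).reshape(d, d, n)
            for i in range(d):
                for j in range(d):
                    if i != j:
                        rows.append(M[i, j])       # off-diagonal entries vanish
            for i in range(1, d):
                rows.append(M[0, 0] - M[i, i])     # diagonal entries agree
        return n - np.linalg.matrix_rank(np.array(rows), tol=tol)

    print(inverse_space_dim(np.random.default_rng(2).standard_normal((2, 2, 2))))

For a generic 2×2×2 hypermatrix this prints 1, in line with the discussion in the comments below.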

What makes this interesting is that we can form inverses using invariants. This generalises the expression for the inverse of a matrix in terms of its determinant, which is

             A^{-1} = (1/det A) (d/dA) det A

If A is a hypermatrix of rank r and K(A) is an invariant of degree m, then the inverse of A with respect to K is defined as

             Inv_K(A) = (1/K(A)) (d/dA) K(A)
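For the matrix case K = det this reduces to the formula above, which we can check numerically with a finite-difference gradient (a sketch; depending on index conventions the result is the transpose of the usual inverse):

    def numeric_grad(K, A, h=1e-6):
        """Central-difference gradient of a polynomial invariant K at A."""
        g = np.zeros_like(A)
        for idx in np.ndindex(A.shape):
            dA = np.zeros_like(A)
            dA[idx] = h
            g[idx] = (K(A + dA) - K(A - dA)) / (2 * h)
        return g

    A = np.array([[2.0, 1.0], [0.0, 3.0]])
    inv_K = numeric_grad(np.linalg.det, A) / np.linalg.det(A)
    print(np.allclose(inv_K, np.linalg.inv(A).T))   # True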

If we contract A with this inverse over all directions we find that

             A(r) Inv_K(A) = m
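For K = det and the 2×2 example above, this is Euler's theorem at work: summing a_ij ∂(det A)/∂a_ij over all entries Laplace-expands the determinant along every row, giving m = 2 copies of det A:

    # full contraction A(2) Inv_K(A) over matching entries equals m
    print(np.isclose(np.sum(A * inv_K), 2.0))   # True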

This result is true for any homogeneous multivariate polynomial of degree m (it is Euler's theorem on homogeneous functions), but we can use the invariance property of K(A) to derive the stronger result that this inverse is an inverse in the sense defined previously. To see this, consider a transformation of A in which its last direction is acted on by a linear operator, so that δA = AE where E is small.

       δK(A) = Tr( (d/dA)K(A) (r-1) δA ) = K(A) Tr( Inv_K(A) (r-1) AE ) + O(E^2)

But by the transformation rule for invariants (writing d for the dimension of the transformed direction) we get

     K(A) -> K(A) det(I+E)^{m/d}

     => δK(A) = (m/d) K(A) Tr(E) + O(E^2)

By bringing these together we conclude that

           Inv_K(A) (r-1) A = (m/d) I

Similar results hold for the other directions, which confirms that Inv_K(A) is an inverse of A.
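As a concrete test beyond matrices, we can take K to be Cayley's hyperdeterminant of a 2×2×2 hypermatrix, which has degree m = 4 with d = 2, so the contraction should give 2I. This sketch assumes the standard expression for that hyperdeterminant and reuses numeric_grad from above:

    def hyperdet222(a):
        """Cayley's 2x2x2 hyperdeterminant, a degree 4 invariant."""
        return (a[0,0,0]**2*a[1,1,1]**2 + a[0,0,1]**2*a[1,1,0]**2
              + a[0,1,0]**2*a[1,0,1]**2 + a[0,1,1]**2*a[1,0,0]**2
              - 2*(a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                 + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                 + a[0,0,0]*a[0,1,1]*a[1,0,0]*a[1,1,1]
                 + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,1]*a[1,0,0]*a[1,1,0]
                 + a[0,1,0]*a[0,1,1]*a[1,0,0]*a[1,0,1])
              + 4*(a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

    A = np.random.default_rng(3).standard_normal((2, 2, 2))
    inv_A = numeric_grad(hyperdet222, A) / hyperdet222(A)
    # contract Inv_K(A) with A over the first two matching directions
    M = np.tensordot(inv_A, A, axes=([0, 1], [0, 1]))
    print(np.allclose(M, 2.0 * np.eye(2), atol=1e-5))   # True: M = (m/d) I = 2 I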

That is all I have to say about inverses for now, but I’ll leave you with a few questions to consider. Firstly, I have not demonstrated that Inv_K(Inv_K(A)) = A. How can this be proved in general? Secondly, the ring of invariants for a given format is finitely generated. Given a set of generators, we get a set of inverses. Are these linearly independent, and do they generate the full vector space of inverses? I don’t really know the answers to these questions, so if you do know something, please leave a comment about it.


5 Responses to “Hypermatrix Inverses”

  1. superstrings Says:

    David Glynn Says:

    June 11, 2009 at 5:17 am

    Regarding “Hypermatrix Inverses”, which is very interesting. If A and X are hypercubes of format m^r, and we want to solve AX = I, which is of format m^2, where the r directions of A and X are identified and r-1 of them are contracted in the product, then this is a set of rm^2 linear equations in the m^r entries of X. So if there is a solution, there will be a large space of solutions when r < m^(r-2). It should be possible to calculate the exact dimension of the generic solution space of AX = I. First one should solve AX = 0, the null space. When r = 3 it should be interesting.

  2. superstrings Says:

    By this reasoning we can say that the space of inverses (for generic non-singular hypermatrices) has dimension D bounded by
    m^r >= D >= m^r - rm^2 + 1
    The dimension of the space will be larger if some of the rm^2 equations are not linearly independent. In fact we know that some of them are dependent, because tr(AX) = m is the same equation for each of the r directions. With this we can improve the bound to
    m^r >= D >= m^r - rm^2 + r
    There may be more dependencies to find. When r = 2 the left inverse of a matrix is also a right inverse, so there are more dependencies. For m = 2, r = 3 the bound is D >= -1, and we know that D = 1, so there must be 2 more dependencies. It may be hard to count all the dependencies in the general case; I’m not sure.

    A way to improve the upper bound on D is to pick a specific hypermatrix and find the space of inverses for that example. This could be easy when most of the entries are zero, but we must keep enough non-zero entries so that the inverses exist.

    • David Glynn Says:

      I believe that Inv(Inv(A)) will not in general equal A. For example, take K to be Cayley’s det_0 function of degree 3 applied to A, a “diagonal” hypercube of format 3^4. Diagonal here means that
      a_1111 = b, a_1122 = c, a_1133 = d,
      a_2211 = e, a_2222 = f, a_2233 = g,
      a_3311 = h, a_3322 = i, a_3333 = j.
      Then det_0(A) = per(b,c,d; e,f,g; h,i,j), where per is the permanent of the 3 by 3 matrix. And Inv(A) is also a “diagonal” hypercube X where
      x_1111 = fj+gi, x_1122 = ej+gh, x_2211 = cj+di, x_2222 = bj+dh, etc.
      So Inv(Inv(A))_3333 = (fj+gi)(bj+dh) + (ej+gh)(cj+di), which is not equal to j·det_0(A) unless the characteristic of the field is two. (In the case of char 2 one needs another example.)

      Despite this, the mapping A -> Inv(A) should be an invariant one, and it makes it even more interesting that this mapping is not always an involution. Inv(A) is the tangent hyperplane at a “point” A of det_0(A) = 0, and by the biduality theorem (see the first chapter of Gelfand, Kapranov and Zelevinsky’s book) the dual of the dual of a hypersurface is the same hypersurface; but this does not imply that Inv(Inv(A)) = A, since the tangent hypersurface must have a formula different from Inv(A) = 0. (I wonder what this formula is?) However, for hypercubes that are square matrices it does have this formula, so a counterexample must come from higher dimensional hypercubes.

    • PhilG Says:

      Yes, I see that you are right. At first I thought there could be a subtle flaw, because setting components to zero and taking the derivative is not always the same as taking the derivative and then setting them to zero. However, in this case the order makes no difference.
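      For anyone who wants to check the computation, here is a minimal symbolic sketch with SymPy (taking Inv to be the unnormalised gradient of det_0, as in the comment above):

          import sympy as sp
          from itertools import permutations

          def permanent(M):
              n = M.rows
              return sum(sp.prod([M[k, p[k]] for k in range(n)])
                         for p in permutations(range(n)))

          b, c, d, e, f, g, h, i, j = syms = sp.symbols('b c d e f g h i j')
          K = permanent(sp.Matrix(3, 3, syms))                # det_0 of the diagonal hypercube
          X = sp.Matrix(3, 3, [sp.diff(K, s) for s in syms])  # unnormalised Inv(A)
          # Inv(Inv(A))_3333 is the permanental cofactor of x_3333 in X
          invinv_3333 = X[0, 0]*X[1, 1] + X[0, 1]*X[1, 0]
          print(sp.expand(invinv_3333 - j*K))                 # 2*d*g*h*i: non-zero except in characteristic 2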

      This makes me wonder in which special cases it is an involution. The cases where I know it is an involution are:
      – det(M) for any square matrix
      – det(A) for the 2×2×2 hyperdeterminant
      – det_0(A) for 2^2n hypercubes
      – the fourth degree invariants L, M, N for the 2^4 hypercube (because they can be written as 4×4 determinants)
      – any power of the above
      That is already quite a lot of cases. I wonder for which formats the mapping is an involution for the (discriminant) hyperdeterminants.

  3. Pablo34 Says:

    Hi.
    I don’t know much about multidimensional algebra.
    I need to know the definition of multidimensional matrix product.
    I can’t find it anywhere.

    I really would appreciate your help with this issue.

    Thanks a lot

