Tuesday 7 May 2019

Visualizing linear algebra: Matrix inverse

Figure 1: A linear system of equations
This is Part 4 in a series on linear algebra [1].

In the series so far, matrices have been used to transform a vector space, which is particularly useful for applications such as computer graphics and robotics. However, their most general use is in solving linear systems of equations. In a linear equation, the only thing happening to each variable is that it is scaled by some constant, and the only thing happening to those scaled variables is that they are added together.

Figure 1 shows two linear equations: 2x + 2y = -4 and x + 3y = -1. These two equations are packaged into a single vector equation Ax = v, where the 2x2 matrix A contains all the constant coefficients, vector x contains the variables x and y, and vector v contains the constants on the right-hand side. Per the equation itself, matrix A multiplied by vector x equals vector v.

Matrix A represents a linear transformation, so solving Ax = v means looking for a vector x which, after the transformation is applied to it, lands on vector v. The way to find vector x is to apply the transformation in reverse: when the inverse transformation is applied to vector v, it lands back on vector x. The inverse of A is written A⁻¹, and AA⁻¹ = A⁻¹A = I, where I is the identity matrix [2]. Geometrically, applying a transformation to a vector space and then applying the inverse transformation restores the original vector space. This is equivalent to applying the identity matrix, which leaves the vector space unchanged. The identity (or unit) matrix is defined as:

[1 0]
[0 1]

Notice that the columns of the identity matrix are simply the unit vectors i-hat and j-hat. Now, multiplying both sides of the equation Ax = v on the left by A⁻¹ gives A⁻¹Ax = Ix = x = A⁻¹v.
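
As a quick numerical sanity check (a minimal NumPy sketch, not part of the derivation itself), the product A⁻¹A does indeed come out as the identity:

import numpy as np

# Coefficient matrix A from the example system
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

A_inv = np.linalg.inv(A)  # numerical inverse A⁻¹
print(A_inv @ A)          # ≈ [[1. 0.] [0. 1.]], the identity matrix I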

The equation for calculating the inverse of a matrix is:

[a b]⁻¹ = 1/(ad - bc) [ d -b]
[c d]                 [-c  a]

Note that the expression ad - bc is the determinant of the matrix. The inverse exists as long as the determinant is non-zero. Calculating A⁻¹:

[2 2]⁻¹ = 1/(2*3 - 2*1) [ 3 -2] = 1/4 [ 3 -2]
[1 3]                   [-1  2]       [-1  2]
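
The same closed-form rule is easy to express in code. Below is a minimal Python sketch (the function name inverse_2x2 is just an illustrative choice):

def inverse_2x2(a, b, c, d):
    """Invert the 2x2 matrix [[a, b], [c, d]] using the closed-form
    formula 1/(ad - bc) * [[d, -b], [-c, a]]."""
    det = a * d - b * c  # the determinant
    if det == 0:
        raise ValueError("determinant is zero, so no inverse exists")
    s = 1.0 / det
    return [[ s * d, -s * b],
            [-s * c,  s * a]]

print(inverse_2x2(2, 2, 1, 3))  # [[0.75, -0.5], [-0.25, 0.5]], i.e. 1/4 of [ 3 -2; -1 2]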

Figure 2: Solving the equations
Solving our vector equation:

x = A⁻¹v
  = 1/4 [ 3 -2][-4]
        [-1  2][-1]
  = 1/4 * (-4*[ 3] + -1*[-2])
              [-1]      [ 2]
  = 1/4 [-4* 3 + -1*-2]
        [-4*-1 + -1* 2]
  = 1/4 [-10]
        [  2]
  = [-2.5]
    [ 0.5]
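
The same solution can be reproduced with NumPy. As a side note, in practice np.linalg.solve is preferred over forming the inverse explicitly, since it is faster and more numerically stable:

import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
v = np.array([-4.0, -1.0])

print(np.linalg.inv(A) @ v)   # x = A⁻¹v -> [-2.5  0.5]
print(np.linalg.solve(A, v))  # same result, without forming A⁻¹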

Plugging x and y into the original linear equations:

2x + 2y = 2 * -2.5 + 2 * 0.5 = -5 + 1 = -4
1x + 3y = 1 * -2.5 + 3 * 0.5 = -2.5 + 1.5 = -1
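
Or, as a single matrix-vector product in NumPy (a self-contained one-liner check):

import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
x = np.array([-2.5, 0.5])
print(A @ x)  # [-4. -1.], which matches vector v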

That confirms that the correct solution has been found. Figure 2 shows the geometric representation of the vector equation.

The set of all possible outputs of a matrix, whether a plane, a line, a 3D space, and so on, is called the column space of the matrix (the span of its columns). The number of dimensions in the column space is the rank of the matrix. In the above example the rank is two (and, since the determinant is non-zero, the matrix is full rank). If the determinant is zero, the transformation squishes the plane down to a line or a point, and the rank is one or zero respectively. In the rank-one case, a solution exists only if vector v happens to fall on that line. The zero vector is always included in the column space, and the set of input vectors that land on the origin is called the null space or kernel of the matrix.
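
NumPy can report the rank directly. As a small sketch, the example matrix A is full rank, while the second matrix below (a hypothetical singular example, not from the figures) has determinant zero and rank one:

import numpy as np

A = np.array([[2, 2],
              [1, 3]])  # determinant 4: full rank
B = np.array([[2, 2],
              [1, 1]])  # hypothetical: determinant 0, columns span only a line

print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.matrix_rank(B))  # 1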

Next up: the dot product

--

[1] The figures and examples of the posts in this series are based on the Essence of Linear Algebra series by 3Blue1Brown.

[2] The inverse of a matrix is analogous to the reciprocal of a number. Just as 8 * 8⁻¹ = 1, so AA⁻¹ = I, where I is the identity matrix (itself analogous to the number 1).
