Friday 17 May 2019

Visualizing linear algebra: Vector spaces

Figure 1: Functions, like vectors, can be added
This is Part 9 in a series on linear algebra [1].

A vector can be represented numerically as a list of numbers or geometrically as an arrow. But is one representation more fundamental than the other, or are they both manifestations of something deeper?

A list of numbers seems clear-cut and unambiguous when dealing with different dimensions, whether a 2D vector or a 100D vector, whereas a 4D or higher space can be difficult to describe geometrically. On the other hand, the vector space exists independently of the coordinates that you give it, and those coordinates can seem somewhat arbitrary since they depend on your choice of basis vectors. For example, determinants and eigenvectors are inherently spatial (the determinant tells you how much a transformation scales areas and eigenvectors stay on their own span during a transformation), and you can freely change your coordinate system without changing the underlying value of either one.

So if vectors are not fundamentally lists of numbers and their underlying essence is something more spatial, that raises the question of what mathematicians mean when they use a word like "space" or "spatial".

Figure 2: The sum of two functions
It turns out that there is something which is neither an arrow nor a list of numbers but is also a type of vector: functions. In the same way that you can add two vectors together, there is a sensible notion of adding two functions f and g to get a new function (f + g). That is, the value of the sum function at any given input x is the sum of the values f(x) and g(x). See Figure 1, which shows two functions, and Figure 2, which shows their sum. This is represented as:

(f + g)(x) = f(x) + g(x)
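
As a quick sketch of this idea in code (a minimal Python illustration; the helper name add_functions is made up for this example, not part of any library):

    # Pointwise addition of two functions: (f + g)(x) = f(x) + g(x).
    def add_functions(f, g):
        return lambda x: f(x) + g(x)

    f = lambda x: x ** 2         # an example function f
    g = lambda x: 2 * x + 1      # an example function g
    h = add_functions(f, g)      # the sum function (f + g)
    print(h(3))                  # f(3) + g(3) = 9 + 7 = 16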

Figure 3: Functions, like vectors, can be scaled
This is similar to adding vectors coordinate by coordinate, except that there are, in a sense, infinitely many coordinates to deal with. Similarly, there is a sensible notion of scaling a function by a real number: just scale all of the outputs by that number. For example, as illustrated in Figure 3:

(2f)(x) = 2f(x)

Again, this is analogous to scaling a vector coordinate by coordinate, but with infinitely many coordinates.
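
The same kind of sketch works for scaling; again, the name scale_function is just for illustration:

    # Scaling a function by a real number: (c f)(x) = c * f(x).
    def scale_function(c, f):
        return lambda x: c * f(x)

    f = lambda x: x ** 2
    two_f = scale_function(2, f)   # the function (2f)
    print(two_f(3))                # 2 * f(3) = 2 * 9 = 18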

Now, given that the only thing vectors can really do is get added together or scaled, it seems like we should be able to take the same useful constructs and problem-solving techniques of linear algebra (linear transformations, null space, dot products, eigen-everything) that were originally thought about in the context of arrows in space and apply them to functions as well. For example, the derivative transforms one function into another function:

d/dx (3x^3 - x) = 9x^2 - 1

These are sometimes called operators instead of transformations, but the meaning is the same.
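
One way to make this concrete is to represent a polynomial as a list of coefficients [c0, c1, c2, ...], meaning c0 + c1 x + c2 x^2 + ..., and implement the derivative as an operation on such lists. This is a minimal sketch for illustration (the name differentiate is made up):

    # The derivative as an operator on polynomials given as coefficient
    # lists: the coefficient c of x^n contributes n * c to x^(n-1).
    def differentiate(coeffs):
        return [n * c for n, c in enumerate(coeffs)][1:]

    # 3x^3 - x has coefficients [0, -1, 0, 3].
    print(differentiate([0, -1, 0, 3]))  # [-1, 0, 9], i.e. 9x^2 - 1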

Figure 4: Additivity
A transformation is linear if it satisfies two properties, commonly called additivity (see Figure 4) and scaling (see Figure 5).

Additivity means that if you add two vectors, v and w, then apply a transformation to their sum, you get the same result as if you added the transformed versions of v and w. That is:

L(v + w) = L(v) + L(w)
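
A quick numeric check of additivity, using a made-up 2x2 matrix as the linear map L:

    # L(v) = A v for the example matrix A = [[3, 1], [0, 2]].
    def L(v):
        a, b = v
        return (3 * a + 1 * b, 2 * b)

    def add(v, w):
        return (v[0] + w[0], v[1] + w[1])

    v, w = (1, 2), (4, -1)
    print(L(add(v, w)) == add(L(v), L(w)))  # True: L(v + w) = L(v) + L(w)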

Figure 5: Scaling
The scaling property is that when you scale a vector v by some number and then apply the transformation, you get the same result as if you had scaled the transformed version of v by that same amount. That is:

L(cv) = cL(v)
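
And the corresponding check of the scaling property, repeating the same example map:

    # The same example map L(v) = A v with A = [[3, 1], [0, 2]].
    def L(v):
        a, b = v
        return (3 * a + 1 * b, 2 * b)

    c, v = 2.5, (1, 2)
    print(L((c * v[0], c * v[1])) == tuple(c * x for x in L(v)))  # True: L(cv) = cL(v)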

So linear transformations preserve the operations of vector addition and scalar multiplication. These properties also hold for transformations of functions, such as the derivative. For example:

d/dx (x^3 + x^2) = d/dx (x^3) + d/dx (x^2)

d/dx (4x^3) = 4 d/dx (x^3)
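
Both identities can be checked with the coefficient-list sketch of the derivative from earlier (add_coeffs is another made-up helper):

    def differentiate(coeffs):
        return [n * c for n, c in enumerate(coeffs)][1:]

    def add_coeffs(p, q):
        return [a + b for a, b in zip(p, q)]

    p = [0, 0, 0, 1]   # coefficients of x^3
    q = [0, 0, 1, 0]   # coefficients of x^2

    # d/dx (x^3 + x^2) equals d/dx (x^3) + d/dx (x^2)
    print(differentiate(add_coeffs(p, q)) ==
          add_coeffs(differentiate(p), differentiate(q)))  # True

    # d/dx (4x^3) equals 4 d/dx (x^3)
    print(differentiate([4 * c for c in p]) ==
          [4 * c for c in differentiate(p)])  # True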

Consider the vector space of polynomials. Giving this space coordinates requires choosing a basis. Since polynomials are already written down as sums of scaled powers of the variable x, it's natural to choose pure powers of x as the basis functions. The first basis function is the constant function b0(x) = 1, the second is b1(x) = x, then b2(x) = x^2, then b3(x) = x^3, and so on. These basis functions play a role similar to that of i-hat, j-hat and k-hat in the world of vectors as arrows. For example, the vector [5;3;1;0;0...] (with an infinite tail of zeroes) would be expressed in terms of the basis functions as:

5*1 + 3x + 1x^2
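
In this basis, the derivative itself can be written as a matrix: each column records where the derivative sends one basis function. Here is a sketch truncated to four dimensions (the real matrix would be infinite, and the names D and matvec are made up):

    # The derivative in the basis 1, x, x^2, x^3: column n holds the
    # coordinates of d/dx (x^n); e.g. d/dx (x^3) = 3x^2 puts a 3 in
    # column 3, row 2.
    D = [
        [0, 1, 0, 0],
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0],
    ]

    def matvec(A, v):
        return [sum(a * x for a, x in zip(row, v)) for row in A]

    # 5 + 3x + x^2 has coordinates [5, 3, 1, 0].
    print(matvec(D, [5, 3, 1, 0]))  # [3, 2, 0, 0], i.e. 3 + 2x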

Most of the concepts that apply to arrows in space such as linear transformations, the dot product and eigenvectors have direct analogues in the world of functions but with different names such as linear operators, the inner product and eigenfunctions.

Figure 6: Vector spaces
There are lots of vector-like things in math. As long as you're dealing with a set of objects where there's a reasonable notion of scaling and adding, whether that's a set of arrows in space, lists of numbers, functions or whatever else you choose to define, all of the tools developed in linear algebra regarding vectors, linear transformations and so on should apply.

A mathematician need not think about all the different kinds of vector spaces that people might come up with. Instead, there are eight axioms that any vector space must satisfy if all the theory and constructs that have been discovered are going to apply. These axioms are not so much fundamental rules of nature as they are an interface between the mathematician who is discovering results and other people who might want to apply those results to new sorts of vector spaces. The axioms serve as a checklist for verifying one's definitions before applying the results of linear algebra, while mathematicians express their results abstractly, purely in terms of these axioms. Thus the mathematician's answer to "What are vectors?" is to ignore the question, since the form that vectors take doesn't really matter as long as those rules are followed. Similarly, textbooks and lectures tend to be phrased abstractly to describe the most general case.

And this wraps up the series on visualizing linear algebra!

--

[1] The figures and examples of the posts in this series are based on the Essence of Linear Algebra series by 3Blue1Brown.
