Linear Combinations
This is a visually large applet, best viewed on a computer.
On the left panel, you can control two vectors, one purple and one orange. You can also toggle between "arrows" and "functions", but leave that alone for now.
On the right panel, you can drag a point to control a pair of coefficients.
The coefficients you choose on the right panel are also printed near the top of the left panel. They are used to make a LINEAR COMBINATION of the purple and orange vectors, with the result drawn in blue. Some algebraic steps are shown so that you can see how your coefficients affect the calculation and the result.
Think of the purple and orange vectors as ingredients that make an unnamed function whose
- input is the coordinate vector on the right panel, and whose
- output is the blue vector on the left panel.
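A minimal sketch of this unnamed function in code; the names `u`, `v`, and `linear_combination` are illustrative assumptions, not the applet's own notation:

```python
# The "unnamed function": it takes the coefficient pair (c1, c2) from
# the right panel and returns the blue vector c1*u + c2*v shown on the
# left panel. Vectors are plain (x, y) tuples.

def linear_combination(c1, c2, u, v):
    """Return c1*u + c2*v for 2D vectors given as (x, y) tuples."""
    return (c1 * u[0] + c2 * v[0], c1 * u[1] + c2 * v[1])

u = (1.0, 0.0)  # purple vector (example values)
v = (0.0, 1.0)  # orange vector (example values)
print(linear_combination(2.0, 3.0, u, v))  # the blue output vector: (2.0, 3.0)
```

Dragging the point on the right panel amounts to feeding different `(c1, c2)` pairs into this function.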
- If you make the two vectors parallel (linearly dependent), their lattice collapses to a line, or even to just a point. Can you get that to happen? Either way, the output is trapped on this line (or point) and can't reach other parts of the plane.
- It becomes possible to reach the origin through a non-trivial linear combination. That is, there's a place other than the origin where you can put the coefficient vector (right panel) so that the output is the zero vector. Now when you add that funny coefficient vector to any other, the output doesn't move. In other words: anywhere the output can be, it can be there in infinitely many ways.
- If the two vectors are not parallel (linearly independent), their lattice does not collapse. (Using linear combinations of them, you can get anywhere in the plane. We say the two vectors "span the plane".) The unnamed function we've been talking about is "onto the plane", or "surjective".
- There's no input coefficient vector other than the origin that produces the zero vector as output. This property is contagious: no point in the plane is reachable in multiple ways. Different input coefficient vectors always produce different output vectors. The unnamed function we've been talking about is "one-to-one", or "injective".
- Since the unnamed function is one-to-one and onto, it is invertible.
- In this case, the set containing the two vectors is linearly independent and spans the plane. We call it a basis for the plane.
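A small sketch contrasting the two cases above; the vector values and function name are example choices, not the applet's defaults:

```python
def combo(c1, c2, u, v):
    """The 'unnamed function': coefficients (c1, c2) -> c1*u + c2*v."""
    return (c1 * u[0] + c2 * v[0], c1 * u[1] + c2 * v[1])

# Dependent case: v is a multiple of u, so a nonzero coefficient pair
# still lands on the origin, and different inputs can repeat an output.
u, v = (2.0, 1.0), (4.0, 2.0)
print(combo(2.0, -1.0, u, v))                          # (0.0, 0.0)
print(combo(3.0, 0.0, u, v) == combo(1.0, 1.0, u, v))  # True: same output twice

# Independent case: every target is hit by exactly one coefficient pair,
# found by inverting the 2x2 system (Cramer's rule).
u, v = (1.0, 2.0), (3.0, 1.0)
target = (7.0, 4.0)
det = u[0] * v[1] - u[1] * v[0]          # nonzero when u, v independent
c1 = (target[0] * v[1] - target[1] * v[0]) / det
c2 = (u[0] * target[1] - u[1] * target[0]) / det
print((c1, c2))                          # (1.0, 2.0)
print(combo(c1, c2, u, v))               # (7.0, 4.0): the target, recovered
```

The nonzero determinant is exactly what makes the unnamed function invertible: it lets us run the recipe backwards from any target output to its unique coefficient pair.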
Now flip the toggle on the left panel from "arrows" to "functions" and try the following:

1. Use the input coefficient vector (0, 0). The output function should be the Zero Function, the most boring function of all. Can you produce the Zero Function using any other input coefficient vector? This should be impossible (can you prove it?) with the default functions, which are linearly independent. Try again with a pair of linearly dependent functions, say one function and a multiple of it. What do these "linearly dependent" functions have in common, either algebraically or in their graphs?
2. Lock one coordinate or the other at 0 to see multiples of just one function.
3. The space of polynomials of degree at most 1 is a 2-dimensional vector space. That means that if the two functions are linearly independent polynomials of degree at most 1, they'll span that space. (Initially they are, but maybe you've changed that.) Are you able to produce every polynomial function of the form ax + b (their graphs are non-vertical lines) by choosing coefficients?
4. You might be interested in the space of polynomials of degree at most 2. That space has dimension 3, so we can't span it with just two functions; however, you might set one function to x (or some other simple degree-1 polynomial function) and the other to x^2 (or some other simple degree-2 polynomial function). Now play with the input coefficient vector and see what polynomials you can produce.
5. Play with a pair of linearly dependent functions. Notice how linear combinations of these two functions all have basically the same shape.
6. Play with a pair of linearly independent functions. Why is this situation so different from #5?
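For the polynomial explorations above, a small sketch may help. Representing polynomials as coefficient lists is an assumption made for illustration, not how the applet works internally:

```python
# Polynomials as coefficient lists [a0, a1, a2], constant term first.
# A linear combination of two functions is computed coefficient-wise.

def poly_combo(c1, f, c2, g):
    """Return the coefficient list of c1*f + c2*g."""
    return [c1 * fi + c2 * gi for fi, gi in zip(f, g)]

f = [0.0, 1.0, 0.0]  # f(x) = x    (a simple degree-1 choice)
g = [0.0, 0.0, 1.0]  # g(x) = x^2  (a simple degree-2 choice)

print(poly_combo(3.0, f, -2.0, g))  # 3x - 2x^2, i.e. [0.0, 3.0, -2.0]
# No choice of coefficients produces the constant polynomial 1: the
# first entry of every combination is c1*0 + c2*0 = 0. Two functions
# can never span the 3-dimensional space of degree-at-most-2 polynomials.
```

Swapping in two linearly dependent lists (say `f` and `[0.0, 2.0, 0.0]`) shows why their combinations all share one shape: every output is just a multiple of `f`.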