Orthogonal Decomposition
Orthogonal Projection onto a Subspace
Let $\mathbf{y}$ be a vector in $\mathbb{R}^3$. We define the orthogonal projection of $\mathbf{y}$ onto a plane $W$ through the origin, denoted by $\operatorname{proj}_W \mathbf{y}$, as follows:
$\operatorname{proj}_W \mathbf{y} = \hat{\mathbf{y}}$, where $\hat{\mathbf{y}}$ is a vector in $W$ such that $\mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$.
The intuition for this definition can be illustrated by the applet below: $\mathbf{y}$ is the black vector, and the plane $W$ through the origin is spanned by the orthogonal basis $\{\mathbf{u}_1, \mathbf{u}_2\}$.
In fact, $\hat{\mathbf{y}}$ can be explicitly computed as follows:
$$\hat{\mathbf{y}} = \frac{\mathbf{y}\cdot\mathbf{u}_1}{\mathbf{u}_1\cdot\mathbf{u}_1}\,\mathbf{u}_1 + \frac{\mathbf{y}\cdot\mathbf{u}_2}{\mathbf{u}_2\cdot\mathbf{u}_2}\,\mathbf{u}_2.$$
We can verify that $\mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$: using the orthogonality of $\mathbf{u}_1$ and $\mathbf{u}_2$,
$$(\mathbf{y} - \hat{\mathbf{y}})\cdot\mathbf{u}_i = \mathbf{y}\cdot\mathbf{u}_i - \frac{\mathbf{y}\cdot\mathbf{u}_i}{\mathbf{u}_i\cdot\mathbf{u}_i}\,(\mathbf{u}_i\cdot\mathbf{u}_i) = 0 \quad \text{for } i = 1, 2,$$
so $\mathbf{y} - \hat{\mathbf{y}}$ is orthogonal to every vector in $W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$.
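To make this concrete, here is a minimal numerical sketch in NumPy; the vectors $\mathbf{y}$, $\mathbf{u}_1$, $\mathbf{u}_2$ below are illustrative choices of my own, not the ones shown in the applet.

```python
import numpy as np

# Illustrative data: an orthogonal basis {u1, u2} of a plane W through
# the origin in R^3, and a vector y to project onto W.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 2.0])
y  = np.array([2.0, 3.0, 4.0])

assert np.isclose(u1 @ u2, 0.0)  # u1 and u2 are orthogonal

# Orthogonal projection of y onto W = Span{u1, u2}:
# y_hat = (y.u1 / u1.u1) u1 + (y.u2 / u2.u2) u2
y_hat = (y @ u1) / (u1 @ u1) * u1 + (y @ u2) / (u2 @ u2) * u2
z = y - y_hat  # component of y orthogonal to W

# Verify that z = y - y_hat is in W-perp:
print(np.isclose(z @ u1, 0.0), np.isclose(z @ u2, 0.0))  # True True
```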
Orthogonal Decomposition Theorem
In general, let $\mathbf{y}$ be a vector in $\mathbb{R}^n$ and let $W$ be a subspace of $\mathbb{R}^n$ with an orthogonal basis $\{\mathbf{u}_1, \dots, \mathbf{u}_p\}$. Then we have the following theorem:
Orthogonal Decomposition Theorem: $\mathbf{y}$ can be decomposed uniquely as the sum of two vectors, $\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z}$, where $\hat{\mathbf{y}}$ is a vector in $W$ and $\mathbf{z}$ is a vector in $W^\perp$. In fact, $\hat{\mathbf{y}} = \operatorname{proj}_W \mathbf{y}$, i.e. the orthogonal projection of $\mathbf{y}$ onto $W$, which can be explicitly expressed as follows:
$$\hat{\mathbf{y}} = \frac{\mathbf{y}\cdot\mathbf{u}_1}{\mathbf{u}_1\cdot\mathbf{u}_1}\,\mathbf{u}_1 + \cdots + \frac{\mathbf{y}\cdot\mathbf{u}_p}{\mathbf{u}_p\cdot\mathbf{u}_p}\,\mathbf{u}_p.$$
Proof: Let $\hat{\mathbf{y}}$ be the vector in $W$ defined by the above formula. It suffices to show that $\mathbf{z} = \mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$. Similar to what we have done before, we have
$$(\mathbf{y} - \hat{\mathbf{y}})\cdot\mathbf{u}_i = \mathbf{y}\cdot\mathbf{u}_i - \frac{\mathbf{y}\cdot\mathbf{u}_i}{\mathbf{u}_i\cdot\mathbf{u}_i}\,(\mathbf{u}_i\cdot\mathbf{u}_i) = 0$$
for $i = 1, \dots, p$, which implies $\mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$.
As for the uniqueness, we let $\mathbf{y} = \hat{\mathbf{y}}_1 + \mathbf{z}_1$ be another decomposition such that $\hat{\mathbf{y}}_1$ is in $W$ and $\mathbf{z}_1$ is in $W^\perp$. Then $\hat{\mathbf{y}} + \mathbf{z} = \hat{\mathbf{y}}_1 + \mathbf{z}_1$, which implies $\hat{\mathbf{y}} - \hat{\mathbf{y}}_1 = \mathbf{z}_1 - \mathbf{z}$. So $\hat{\mathbf{y}} - \hat{\mathbf{y}}_1$ is in $W^\perp$. But $\hat{\mathbf{y}} - \hat{\mathbf{y}}_1$ is also in $W$. Hence, $(\hat{\mathbf{y}} - \hat{\mathbf{y}}_1)\cdot(\hat{\mathbf{y}} - \hat{\mathbf{y}}_1) = 0$, which means $\hat{\mathbf{y}} = \hat{\mathbf{y}}_1$. Then $\mathbf{z} = \mathbf{z}_1$ and the decomposition is thus unique.
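As a sanity check of the theorem, the sketch below computes the decomposition for a made-up orthogonal basis of a $2$-dimensional subspace of $\mathbb{R}^4$; the helper name `orthogonal_decomposition` is introduced here only for illustration.

```python
import numpy as np

def orthogonal_decomposition(y, U):
    """Decompose y as y_hat + z, where y_hat is in W = Col(U) and z is in W-perp.

    U is an n x p matrix whose columns form an orthogonal basis of W.
    """
    y_hat = np.zeros_like(y, dtype=float)
    for i in range(U.shape[1]):
        u = U[:, i]
        y_hat += (y @ u) / (u @ u) * u   # (y.u_i / u_i.u_i) u_i
    return y_hat, y - y_hat

# Illustrative data: orthogonal basis vectors as the columns of U.
U = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
y = np.array([3.0, 1.0, 4.0, 1.0])

y_hat, z = orthogonal_decomposition(y, U)
print(np.allclose(y, y_hat + z))   # True: y = y_hat + z
print(np.allclose(U.T @ z, 0.0))   # True: z is orthogonal to every basis vector
```

By the uniqueness part of the theorem, any other procedure that returns a pair of vectors in $W$ and $W^\perp$ summing to $\mathbf{y}$ must agree with this one, up to rounding.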
Exercise
Let and .
The Best Approximation Theorem
From the applet above, you can see that when the vector $\mathbf{y}$ in $\mathbb{R}^3$ is written as the orthogonal decomposition $\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z}$, where $\hat{\mathbf{y}} = \operatorname{proj}_W \mathbf{y}$, the length $\|\mathbf{z}\| = \|\mathbf{y} - \hat{\mathbf{y}}\|$ can be regarded as the shortest distance from the point at the arrowhead of $\mathbf{y}$ to the plane $W$. This intuition gives rise to the following theorem:
The Best Approximation Theorem: Let $W$ be a subspace of $\mathbb{R}^n$ and let $\mathbf{y}$ be any vector in $\mathbb{R}^n$. Suppose $\hat{\mathbf{y}} = \operatorname{proj}_W \mathbf{y}$. Then $\hat{\mathbf{y}}$ is the closest point in $W$ to $\mathbf{y}$ in the following sense:
$$\|\mathbf{y} - \hat{\mathbf{y}}\| < \|\mathbf{y} - \mathbf{v}\|$$
for any $\mathbf{v}$ in $W$ distinct from $\hat{\mathbf{y}}$.
Proof: Let $\mathbf{v}$ be any vector in $W$ distinct from $\hat{\mathbf{y}}$. Then $\hat{\mathbf{y}} - \mathbf{v}$ is a nonzero vector in $W$.
Consider $\mathbf{y} - \mathbf{v} = (\mathbf{y} - \hat{\mathbf{y}}) + (\hat{\mathbf{y}} - \mathbf{v})$. By definition, $\mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$ and $\hat{\mathbf{y}} - \mathbf{v}$ is in $W$, so $(\mathbf{y} - \hat{\mathbf{y}})\cdot(\hat{\mathbf{y}} - \mathbf{v}) = 0$. By the Pythagorean theorem, we have
$$\|\mathbf{y} - \mathbf{v}\|^2 = \|\mathbf{y} - \hat{\mathbf{y}}\|^2 + \|\hat{\mathbf{y}} - \mathbf{v}\|^2.$$
Since $\hat{\mathbf{y}} - \mathbf{v}$ is nonzero, we have $\|\hat{\mathbf{y}} - \mathbf{v}\|^2 > 0$, which implies $\|\mathbf{y} - \hat{\mathbf{y}}\|^2 < \|\mathbf{y} - \mathbf{v}\|^2$, i.e. $\|\mathbf{y} - \hat{\mathbf{y}}\| < \|\mathbf{y} - \mathbf{v}\|$.
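The inequality can also be observed numerically. The sketch below reuses the made-up data from the previous example and compares $\|\mathbf{y} - \hat{\mathbf{y}}\|$ with $\|\mathbf{y} - \mathbf{v}\|$ for a few randomly chosen points $\mathbf{v}$ of $W$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same illustrative setup as before: W = Col(U) with orthogonal columns, y in R^4.
U = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
y = np.array([3.0, 1.0, 4.0, 1.0])

# Orthogonal projection of y onto W (sum of the one-dimensional projections).
y_hat = sum((y @ U[:, i]) / (U[:, i] @ U[:, i]) * U[:, i] for i in range(U.shape[1]))
best = np.linalg.norm(y - y_hat)

# Any other point v of W is at least as far from y as y_hat is
# (strictly farther whenever v != y_hat).
for _ in range(5):
    v = U @ rng.normal(size=U.shape[1])   # a random point of W
    print(best <= np.linalg.norm(y - v))  # True
```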
This theorem is particularly useful when we discuss the least-squares method later.