Solving a System of Linear Equations

Back-Substitution

Suppose we have a system of linear equations whose augmented matrix is row reduced to the following matrix in echelon form: We can write down the corresponding linear system as follows: This linear system can be solved by back-substitution: starting from the bottom equation, we get . Then, substituting it into the equation above, we get . Finally, we substitute the values of into the first equation and get . That is, the original linear system has a unique solution .
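
As a minimal computational sketch of back-substitution: the echelon-form matrix above is not reproduced here, so the 3×3 upper-triangular system U and right-hand side b below are made-up illustrations, not the example from the text.

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for an upper-triangular U with nonzero diagonal,
    working from the bottom equation upward."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the variables already solved below
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Hypothetical 3x3 system already in echelon form (illustrative only)
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
print(back_substitute(U, b))   # unique solution [2. 1. 2.]
```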

Inconsistent Systems

Take a look at the following augmented matrix in echelon form: Its corresponding linear system is as follows: You can see that the last equation can never be satisfied, which implies that this linear system has no solution. A linear system is said to be inconsistent if it has no solution. More generally, we have the following theorem: Theorem: A linear system is inconsistent if and only if an echelon form of its augmented matrix has a row of the form with , or equivalently, the rightmost column of the augmented matrix is a pivot column.
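
The test in the theorem can be sketched in a few lines of code. This assumes the augmented matrix is stored with its rightmost column as the right-hand side; the matrix A_aug below is a made-up example with a row of the form 0 = 5, not the one from the text.

```python
import numpy as np

def is_inconsistent(augmented, tol=1e-12):
    """Return True if some row has all zero coefficients but a nonzero
    right-hand side, i.e. an equation 0 = b with b != 0."""
    coeffs, rhs = augmented[:, :-1], augmented[:, -1]
    zero_rows = np.all(np.abs(coeffs) < tol, axis=1)
    return bool(np.any(zero_rows & (np.abs(rhs) > tol)))

# Hypothetical augmented matrix in echelon form whose last row reads 0 = 5
A_aug = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 4.0],
                  [0.0, 0.0, 5.0]])
print(is_inconsistent(A_aug))  # True: the system has no solution
```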

Basic and Free Variables

Consider the following augmented matrix: The corresponding linear system looks like this: Again, we will use back-substitution to solve the linear system, but we observe that there are more variables than equations in the system. Therefore, we expect that there will be more than one solution. First, we classify the variables in the system into two types:
  - Basic variable: a variable corresponding to a pivot column
  - Free variable: a variable corresponding to a non-pivot column
In our example, are basic variables and are free variables. Free variables are the ones to which we can assign any value (that’s why they are called “free” variables). We can use back-substitution to express the basic variables in terms of the free variables as follows: From the bottom equation, we have . Substituting it into the equation above, we get . Then we substitute the expressions for into the remaining equation and get . Usually, we express the free variables as parameters in the solution: let be real numbers such that . Then the solutions to the system can be written as follows: where are any real numbers. Hence, this linear system has infinitely many solutions. All in all, the general procedure for solving a system of linear equations is as follows (a computational sketch of the whole procedure is given after the remark below):
  1. Write down the augmented matrix of the given linear system
  2. Use the row reduction algorithm to transform the augmented matrix to the one in echelon form.
  3. If a row of the form with appears in any row reduction step, we can stop the row reduction and conclude that the linear system is inconsistent, i.e. no solution exists.
  4. If no row of the form described in step 3 appears in the matrix in echelon form, then the system is consistent.
  5. Classify the variables into basic and free variables.
  6. Use back-substitution to find the solution(s) to the linear system. If there is no free variable, the linear system has a unique solution. If there exists at least one free variable, the linear system has infinitely many solutions, which can be expressed in terms of the parameter(s) assigned to the free variable(s).
Remark: If you obtain a matrix in reduced echelon form in step 2, then you can easily write down the solutions (if they exist) without the need for back-substitution.
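
The sketch below walks through steps 2–6 of the procedure with SymPy. Since the augmented matrix from the example above is not reproduced here, the 2×5 augmented matrix A_aug (a system of 2 equations in 4 unknowns) is an illustrative assumption, not the text's example.

```python
from sympy import Matrix, symbols, linsolve

# Hypothetical system with more unknowns than equations (illustrative only):
#   x1 + 2*x2 + x3 +   x4 = 7
#         x2 - x3 + 2*x4 = 3
A_aug = Matrix([[1, 2,  1, 1, 7],
                [0, 1, -1, 2, 3]])

# Step 2: row reduce; rref() also reports the pivot columns
R, pivot_cols = A_aug.rref()

# Steps 3-5: the system is inconsistent iff the rightmost column is a pivot
# column; otherwise the non-pivot columns correspond to free variables
n = A_aug.cols - 1
if n in pivot_cols:
    print("inconsistent: no solution")
else:
    free_cols = [j for j in range(n) if j not in pivot_cols]
    print("basic variables:", [f"x{j+1}" for j in pivot_cols])
    print("free variables: ", [f"x{j+1}" for j in free_cols])
    # Step 6: linsolve expresses the solution set with the free variables
    # acting as parameters, so a free variable means infinitely many solutions
    x = symbols(f"x1:{n+1}")
    print(linsolve(A_aug, x))
```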

Parametric Vector Form

Suppose a linear system has infinitely many solutions, i.e. there exists at least one free variable when the corresponding augmented matrix is row reduced to echelon form. We can express the solutions more elegantly in the so-called parametric vector form. Let’s consider the previous example. We first express the linear system corresponding to the augmented matrix in echelon form as a matrix equation of the form : Then we write the solutions to this matrix equation in vector form: This is the parametric vector form of the solutions to the linear system.

We can take a closer look at the above solutions. For simplicity, we write the above solutions as , where is the first column vector, and are the column vectors multiplied by the parameters and respectively. By definition, for any and . In particular, we set . Then . We usually call the vector a particular solution of . Hence, we have That is, for any and , is a solution to .

Any matrix equation of the form is called a homogeneous equation. The zero vector is always a solution to any homogeneous equation; it is usually called the trivial solution. Any nonzero solution to a homogeneous equation is called a nontrivial solution. In this example, is certainly a nontrivial solution of . In fact, it can be shown that the set of solutions of is , and any solution of can be written as the sum of a particular solution and a solution of the corresponding homogeneous equation .
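
Continuing with the same hypothetical system used in the earlier sketch (not the text's example), the snippet below illustrates this decomposition: a particular solution of Ax = b plus solutions of the homogeneous equation Ax = 0. SymPy's gauss_jordan_solve and nullspace are used here as one way to compute these pieces, not necessarily the method described above.

```python
from sympy import Matrix

# Same hypothetical underdetermined system as before, written as Ax = b
A = Matrix([[1, 2,  1, 1],
            [0, 1, -1, 2]])
b = Matrix([7, 3])

# General solution of Ax = b, expressed in terms of free parameters
x_general, params = A.gauss_jordan_solve(b)
print(x_general)           # parametric vector form of the solution set

# A particular solution: set all parameters to zero
p = x_general.subs({t: 0 for t in params})
print(A * p == b)          # True: p solves Ax = b

# Solutions of the homogeneous equation Ax = 0 span the null space of A;
# every solution of Ax = b is p plus a combination of these vectors
for v in A.nullspace():
    print(A * v)           # the zero vector each time
```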

Exercise

Solve the following system of linear equations: (You can make use of the result from the exercise on the previous page.)

Suppose a system of m linear equations in n variables is expressed as the matrix equation . Show that the following statements are equivalent:

  1. has infinitely many solutions.
  2. When the augmented matrix of the linear system is row reduced to echelon form, the corresponding linear system has at least one free variable.
  3. When the augmented matrix of the linear system is row reduced to echelon form, the corresponding linear system has fewer than n basic variables.
  4. has a nontrivial solution and is consistent.
  5. The set of all n column vectors in is linearly dependent and is consistent.
  6. The linear transformation corresponding to is not injective and is consistent.