In the theory Spanning sets we have seen that it is sometimes possible to span a space with fewer vectors than the given ones. Here, we will elaborate on this possibility. The goal is to obtain as few spanning vectors as possible from a set of vectors that already spans the space. To this end, we will discuss a number of operations on spans which are the abstract counterparts of the elementary row operations of the chapter Systems of linear equations and matrices.
Let #\vec{a}_1 ,\ldots ,\vec{a}_m# be a set of vectors of a vector space. The spanned space #\linspan{\vec{a}_1 ,\ldots ,\vec{a}_m}# does not change if we
- change the order of #\vec{a}_1 ,\ldots ,\vec{a}_m#,
- multiply one of the vectors by a number distinct from # 0#; that is, replace a vector #\vec{a}_i# by #\mu\cdot \vec{a}_i# with #\mu \ne0#,
- add a scalar multiple of one of the vectors to another; that is to say: replace a vector #\vec{a}_i# by #\vec{a}_i +\mu\cdot \vec{a}_j# with #j\neq i# and #\mu# a scalar,
- insert the zero vector somewhere: #\linspan{\vec{a}_1 ,\ldots ,\vec{a}_m} = \linspan{\vec{a}_1 ,\ldots ,\vec{a}_m, \vec{0}}#,
- omit the zero vector: remove #\vec{a}_i# if it is the zero vector,
- append a linear combination of #\vec{a}_1 ,\ldots ,\vec{a}_m# to the list,
- omit the vector #\vec{a}_i# if it is a linear combination of the other vectors.
If we take #\mathbb{R}^n# as the vector space, the first three operations correspond to the elementary row operations on the #m# rows #\vec{a}_1,\ldots,\vec{a}_m# of an #(m\times n)#-matrix.
The remaining four operations concern adding or deleting rows. These operations, too, when applied to the rows of the augmented matrix of a system of linear equations, yield the augmented matrix of an equivalent system.
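For readers who want to check this computationally, here is a minimal sketch using Python's sympy library (the matrix is an arbitrary illustration, not taken from the text). It relies on the fact that two matrices of the same shape have the same row space exactly when they have the same reduced echelon form.

```python
import sympy as sp

# An arbitrary illustrative matrix; its rows play the role of a_1, a_2, a_3.
A = sp.Matrix([[1, 2, 0], [0, 1, 1], [1, 0, 1]])

B = A.copy(); B.row_swap(0, 1)                   # operation 1: reorder the vectors
C = A.copy(); C[2, :] = 5 * C.row(2)             # operation 2: scale a vector by 5 != 0
D = A.copy(); D[0, :] = D.row(0) + 3 * D.row(1)  # operation 3: add 3*(row 2) to row 1

# Same reduced echelon form <=> same row space (for matrices of equal shape).
assert A.rref() == B.rref() == C.rref() == D.rref()
```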
For each of these seven statements, the proof comes down to the fact that every linear combination \[ \lambda_1\cdot\vec{a}_1+\cdots+\lambda_m\cdot\vec{a}_m\] can be written as a linear combination of the new list of vectors, and vice versa.
1. Change the order of #\vec{a}_1 ,\ldots ,\vec{a}_m#: we use the commutativity of vector addition to rewrite the linear combination in the new ordering. For #m=2#, for example, \[ \lambda_1\cdot\vec{a}_1+\lambda_2\cdot\vec{a}_2=\lambda_2\cdot\vec{a}_2+\lambda_1\cdot\vec{a}_1\]
2. Replacement of #\vec{a}_i# by #\mu\cdot \vec{a}_i# with #\mu\neq0#: we rewrite the linear combination \[\lambda_1\cdot\vec{a}_1+\cdots+\lambda_i\cdot\vec{a}_i +\cdots+\lambda_m\cdot\vec{a}_m \ \text{ as }\ \lambda_1\cdot\vec{a}_1+\cdots+\frac{\lambda_i}{\mu}\cdot(\mu\cdot\vec{a}_i) +\cdots+\lambda_m\cdot\vec{a}_m\] This proves \[\linspan{\vec{a}_1,\ldots,\vec{a}_i,\ldots,\vec{a}_m}\subseteq\linspan{\vec{a}_1,\ldots,\mu\cdot\vec{a}_i ,\ldots,\vec{a}_m}\] Conversely, we can rewrite \[ \lambda_1\cdot\vec{a}_1+\cdots+\lambda_i\cdot(\mu\cdot\vec{a}_i) +\cdots+\lambda_m\cdot\vec{a}_m\] as \[ \lambda_1\cdot\vec{a}_1+\cdots+(\lambda_i\cdot\mu)\cdot\vec{a}_i +\cdots+\lambda_m\cdot\vec{a}_m\] This proves \[\linspan{\vec{a}_1,\ldots,\vec{a}_i,\ldots,\vec{a}_m} \supseteq \linspan{\vec{a}_1,\ldots,\mu\cdot\vec{a}_i ,\ldots,\vec{a}_m}\] From these two inclusions together it follows that the two spans are equal.
3. Replacement of a vector #\vec{a}_i# by #\vec{a}_i +\mu\cdot \vec{a}_j# with #j\neq i#: by suitably changing the ordering of the vectors we only need to show this in the case where we replace #\vec{a}_1# by the vector #\vec{a}_1+\mu\cdot\vec{a}_2#, thus for #i=1# and #j=2#. The linear combination # \lambda_1\cdot\vec{a}_1+\lambda_2\cdot\vec{a}_2 +\cdots+\lambda_m\cdot\vec{a}_m# can be rewritten as # \lambda_1\cdot\left(\vec{a}_1+\mu\cdot\vec{a}_2\right)+\left(\lambda_2-\lambda_1\cdot\mu\right)\cdot\vec{a}_2 +\cdots+\lambda_m\cdot\vec{a}_m#. This proves \[ \linspan{\vec{a}_1 +\mu\cdot \vec{a}_2 ,\vec{a}_2 ,\ldots ,\vec{a}_m}\supseteq
\linspan{\vec{a}_1 ,\ldots ,\vec{a}_m} \] Conversely, # \lambda_1\cdot\left(\vec{a}_1+\mu\cdot \vec{a}_2\right)+ \lambda_2\cdot\vec{a}_2+\cdots+\lambda_m\cdot\vec{a}_m# can be rewritten as # \lambda_1\cdot\vec{a}_1+\left(\lambda_2+\lambda_1\cdot\mu\right)\cdot \vec{a}_2+\cdots+\lambda_m\cdot\vec{a}_m#. This proves \[ \linspan{\vec{a}_1 +\mu\cdot \vec{a}_2 ,\vec{a}_2 ,\ldots ,\vec{a}_m}\subseteq
\linspan{\vec{a}_1 ,\ldots ,\vec{a}_m } \] The two inclusions together imply equality of the two spans.
4 and 5. Insertion and deletion of the zero vector is permitted: the coefficient of the zero vector in a linear combination of #\vec{a}_1 ,\ldots ,\vec{a}_m, \vec{0}# does not matter, because the contribution of the zero vector is nil, so \[ \linspan{\vec{a}_1 ,\ldots ,\vec{a}_m} = \linspan{\vec{a}_1 ,\ldots ,\vec{a}_m, \vec{0}}\]
6 and 7. If #\vec{a}_j# is a linear combination of #\vec{a}_1,\ldots,\vec{a}_{j-1},\vec{a}_{j+1},\ldots,\vec{a}_m#, then # \linspan{\vec{a}_1 ,\ldots ,\vec{a}_m}# coincides with #\linspan{\vec{a}_1,\ldots,\vec{a}_{j-1},\vec{a}_{j+1},\ldots,\vec{a}_m}#: in view of part 1 we may suppose #j=m#. Now, if #\vec{a}_m=\mu_1\cdot \vec{a}_1 + \cdots +\mu_{m-1} \cdot\vec{a}_{m-1}# for certain scalars #\mu_1,\ldots,\mu_{m-1}#, we find
\[
\begin{array}{rcl}\linspan{\vec{a}_1 ,\ldots ,\vec{a}_{m-1}} &= &\linspan{\vec{a}_1 ,\ldots ,\vec{a}_{m-1} , \vec{0}}\\&&\phantom{xx}\color{blue}{\text{part 4}}\\
&= &\linspan{\vec{a}_1 ,\ldots ,\vec{a}_{m-1} , \vec{0}+\mu_1\cdot\vec{a}_1 } \\&&\phantom{xx}\color{blue}{\text{part 3}}\\
& \vdots \\
& =& \linspan{\vec{a}_1 ,\ldots ,\vec{a}_{m-1} ,\mu_1\cdot\vec{a}_1 +\cdots +\mu_{m-1}\cdot\vec{a}_{m-1} }\\&&\phantom{xx}\color{blue}{\text{part 3}}\\
& =& \linspan{\vec{a}_1 ,\ldots ,\vec{a}_{m-1} ,\vec{a}_m }
\end{array}\]
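The last four operations can be checked numerically in the same spirit. A minimal sympy sketch (again with arbitrary illustrative vectors): appending a dependent row only adds a zero row to the reduced echelon form, so the row space is unchanged; omission is the same statement read backwards.

```python
import sympy as sp

a1, a2 = sp.Matrix([[1, 0, 1]]), sp.Matrix([[0, 1, 2]])

longer = sp.Matrix.vstack(a1, a2, 2*a1 - a2)  # append the combination 2*a1 - a2
shorter = sp.Matrix.vstack(a1, a2)

# Both row spaces have the same canonical basis: the nonzero rows of the rref.
assert longer.rref()[0][:2, :] == shorter.rref()[0]
assert longer.rank() == shorter.rank() == 2
```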
By combining some of these parts, we arrive at a way of changing the set of spanning vectors that is important for the sequel:
Let #j# be a natural number in #\ivcc{1}{m}#. If #\vec{u}_1,\ldots ,\vec{u}_m# and #\vec{u}# are vectors with #\vec{u}=\lambda_1\vec{u}_1+\cdots+\lambda_m\vec{u}_m#, where #\lambda_1,\ldots,\lambda_m# are scalars with #\lambda_j\neq 0#, then
\[
\linspan{\vec{u}_1,\ldots ,\vec{u}_m}=\linspan{\vec{u}_1,\ldots,\vec{u}_{j-1},\vec{u},\vec{u}_{j+1},\ldots ,\vec{u}_m}
\]
This theorem states that if a vector #\vec{u}\in \linspan{\vec{u}_1,\ldots ,\vec{u}_m}# can be written as a linear combination of #\vec{u}_1,\ldots,\vec{u}_m# in which the coefficient of #\vec{u}_j# is distinct from #0#, then we can replace the vector #\vec{u}_j# by #\vec{u}# without altering the spanned subspace.
The name Exchange Theorem refers to the exchange of #\vec{u}# with #\vec{u}_j#.
First multiply the vector #\vec{u}_j# by #\lambda_j# (which is distinct from #0#; use part 2), and then successively add #\lambda_1\vec{u}_1,\ldots ,\lambda_{j-1}\vec{u}_{j-1}#, #\lambda_{j+1}\vec{u}_{j+1},\ldots ,\lambda_m\vec{u}_m# to the #j#-th vector (use part 3 repeatedly). The #j#-th vector then becomes #\vec{u}#, while the spanned space remains unchanged, so \[\linspan{\vec{u}_1,\ldots ,\vec{u}_j,\ldots ,\vec{u}_m}=\linspan{\vec{u}_1,\ldots ,\vec{u}_{j-1},\vec{u},\vec{u}_{j+1},\ldots ,\vec{u}_m}\]
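As a quick sanity check of the Exchange Theorem, the following sympy sketch (with arbitrary illustrative vectors) replaces #\vec{u}_2# by a combination #\vec{u}# whose coefficient #\lambda_2=3# is distinct from #0#:

```python
import sympy as sp

u1, u2, u3 = sp.Matrix([[1, 0, 2]]), sp.Matrix([[0, 1, 1]]), sp.Matrix([[1, 1, 0]])
u = 2*u1 + 3*u2 + 1*u3               # lambda_2 = 3 is nonzero, so u may replace u_2

M_old = sp.Matrix.vstack(u1, u2, u3)
M_new = sp.Matrix.vstack(u1, u, u3)  # u_2 exchanged for u
assert M_old.rref() == M_new.rref()  # the spanned space is unchanged
```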
By repeated application of the rules above we can see that (regardless of the vector space in which we operate)
\[
\begin{array}{rl}
\linspan{\vec{a}+2\vec{b}, \vec{a}-\vec{b},\vec{a}+\vec{b}}& =
\linspan{\vec{a}+2\vec{b}, \vec{a}-\vec{b},\vec{a}+\vec{b}+(\vec{a}-\vec{b})}\\
& = \linspan{\vec{a}+2\vec{b}, \vec{a}-\vec{b},2\vec{a}}\\
& =\linspan{\vec{a}+2\vec{b}-\frac12\cdot 2\vec{a}, \vec{a}-\vec{b}-\frac12\cdot 2\vec{a},2\vec{a}}\\
& =\linspan{2\vec{b}, -\vec{b},2\vec{a}}= \linspan{\vec{b}, -\vec{b},\vec{a}}\\
& =
\linspan{\vec{b}, -\vec{b}+\vec{b},\vec{a}}=\linspan{\vec{b}, \vec{0},\vec{a}}\\
& =\linspan{\vec{a}, \vec{b}}
\end{array}
\]
The required number of spanning vectors has thus been reduced to two. This process can be carried out systematically with the aid of matrices. As with linear equations, there is no need to write #\vec{a}# and #\vec{b}# explicitly at every step: what matters are the coefficients. Therefore, we collect the coefficients in the rows of a matrix; the applicable operations are the row reduction operations that lead to the reduced echelon form:
\[
\hbox{reduce} \quad
\left(\begin{array}{cc}
1 & 2 \\ 1 & -1 \\ 1 & 1
\end{array}\right)
\quad \hbox{to reduced echelon form}\quad
\left(\begin{array}{cc}
1 & 0 \\ 0 & 1 \\ 0 & 0
\end{array}\right)
\]
Finally, we interpret the result as #\linspan{\vec{a}+2\vec{b}, \vec{a}-\vec{b},\vec{a}+\vec{b}} =\linspan{\vec{a},\vec{b}}#. The rules concerning addition and removal of vectors are not used in row reduction itself; they only come into play when we discard the zero rows.
The intermediate steps involving the matrices only serve to organize our calculations and provide an application of row reduction operations with matrices.
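The reduction itself can also be delegated to software. A minimal sympy sketch of the computation above:

```python
import sympy as sp

# The rows hold the coefficients of a and b in a+2b, a-b, a+b.
M = sp.Matrix([[1, 2], [1, -1], [1, 1]])
R, pivots = M.rref()
print(R)       # Matrix([[1, 0], [0, 1], [0, 0]])
print(pivots)  # (0, 1)
# Ignoring the zero row, the result says the span equals span{a, b}.
```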
Let #P_5# be the vector space of polynomials in #x# of degree at most #5#, and write
\[\begin{array}{rcl}p_1(x)&=&x^5+x^4+2\cdot x^3+4\cdot x^2+2\cdot x\\
p_2(x)&=&-x^5+x^4-x^3+4\cdot x+2\\
p_3(x)&=&-3\cdot x^5-6\cdot x^3-6\cdot x^2\\
p_4(x)&=&-2\cdot x^5+x^4-3\cdot x^3-2\cdot x^2+4\cdot x+2\\ p_5(x)&=&-3\cdot x^4-6\cdot x^2-6\cdot x\end{array}\]
The subspace #\linspan{p_1(x),p_2(x),p_3(x),p_4(x),p_5(x)}# can also be spanned by three polynomials. Determine a list #\rv{q_1(x),q_2(x),q_3(x)}# of three polynomials #q_1(x)#, #q_2(x)#, #q_3(x)# with this property, that is, such that
\[\linspan{q_1(x),q_2(x),q_3(x)}=\linspan{p_1(x),p_2(x),p_3(x),p_4(x),p_5(x)}\]
#\rv{x^5+2\cdot x^3+2\cdot x^2,x^4+2\cdot x^2+2\cdot x,x^3+2\cdot x+2}#
We use the fact that #P_5# is spanned by the zeroth through the fifth power of #x#: \[ P_5=\linspan{1,x,x^2,x^3,x^4,x^5}\] We record the coefficients of the powers of #x# in the polynomials #p_1(x),\ldots,p_5(x)# in a #(5\times 6)#-matrix #M#: the number in the #i#-th row and the #j#-th column of #M# is the coefficient of #x^{6-j}# in #p_i(x)#. This leads to the matrix
\[M=\matrix{1 & 1 & 2 & 4 & 2 & 0 \\ -1 & 1 & -1 & 0 & 4 & 2 \\ -3 & 0 & -6 & -6 & 0 & 0 \\ -2 & 1 & -3 & -2 & 4 & 2 \\ 0 & -3 & 0 & -6 & -6 & 0 \\ }\] Using row reduction, we bring this matrix into echelon form: \[N= \matrix{1 & 0 & 2 & 2 & 0 & 0 \\ 0 & 1 & 0 & 2 & 2 & 0 \\ 0 & 0 & 1 & 0 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ }\] Only the first three rows contribute to the span. Translated back into polynomials, this gives:
\[\begin{array}{rl}&x^5+2\cdot x^3+2\cdot x^2\\&x^4+2\cdot x^2+2\cdot x\\&x^3+2\cdot x+2\end{array}\] Thus we find the answer \[\rv{x^5+2\cdot x^3+2\cdot x^2,x^4+2\cdot x^2+2\cdot x,x^3+2\cdot x+2}\]
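The whole computation can be reproduced with a short sympy script. Note that sympy's `rref` returns the fully reduced echelon form, in which the first row of #N# has additionally been cleared above the third pivot; its nonzero rows therefore give a different but equally valid list of three spanning polynomials.

```python
import sympy as sp

x = sp.symbols('x')
polys = [
    x**5 + x**4 + 2*x**3 + 4*x**2 + 2*x,
    -x**5 + x**4 - x**3 + 4*x + 2,
    -3*x**5 - 6*x**3 - 6*x**2,
    -2*x**5 + x**4 - 3*x**3 - 2*x**2 + 4*x + 2,
    -3*x**4 - 6*x**2 - 6*x,
]

# Row i lists the coefficients of x^5, x^4, ..., x, 1 in polys[i].
rows = []
for p in polys:
    c = sp.Poly(p, x).all_coeffs()       # highest power first
    rows.append([0] * (6 - len(c)) + c)  # pad missing leading powers with zeros
M = sp.Matrix(rows)

R, _ = M.rref()  # fully reduced: its first row is N's first row minus 2*(N's third row)
basis = [sum(c * x**(5 - j) for j, c in enumerate(R.row(i))) for i in range(3)]
print(basis)     # [x**5 + 2*x**2 - 4*x - 4, x**4 + 2*x**2 + 2*x, x**3 + 2*x + 2]
```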