To compare vector spaces, we use mappings between them that respect the vector space structure, the so-called linear maps. We begin with the definition of a map.
Let #X# and #Y# be two (possibly the same) sets. A map or mapping #G:X\rightarrow Y# assigns to each element #x# of #X# exactly one element #G(x)#, sometimes also written as #Gx#, of #Y#. An explicit expression for #G(x)# is also called the mapping rule. The element #G(x)# of #Y# is referred to as the image of #x# under #G#. The set #X# is called the domain and #Y# the codomain or range of #G#. For an element #y# of #Y#, the (full) preimage of #y# under #G# is the set of all elements #x# of #X# satisfying #G(x) = y#.
A map #G# with domain #X# and codomain #Y# is also indicated by #x\mapsto G(x)#, where #G(x)# is replaced by a mapping rule.
For each vector space #V#, the map #I: V \rightarrow V# given by #I(\vec{v}) =\vec{v}# is called the identity map, or identity, on #V#. If we want to make the vector space #V# explicit, we write #I_V# rather than #I#.
If both #V# and #W# are real vector spaces, then the map #O: V \rightarrow W# given by #O(\vec{v})=\vec{0}# is called the zero map. We also denote this map by #0#, and sometimes by #0_V#.
If #X = Y=\mathbb{R}#, then a map #X\to Y# is nothing but a real function.
Differentiating polynomials in #x# can be described as the map #P\to P#, where #P# is the vector space of all polynomials in #x#, determined by #p(x)\mapsto \dfrac{\dd}{\dd x}p(x)#.
Let #V# and #W# be two (possibly the same) vector spaces. A map #L:V\rightarrow W# is called linear if, for all vectors #\vec{x},\vec{y}\in V# and all numbers #\alpha#, we have
\[
\begin{array}{rclr}L (\vec{x}+\vec{y})&=&L\vec{x}+L\vec{y}&\phantom{xx}\color{blue}{\text{sum rule}}\\
L (\alpha \vec{x})&=&\alpha( L\vec{x})&\phantom{xx}\color{blue}{\text{scalar rule}}
\end{array}
\]
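These two rules can be spot-checked numerically for a concrete map. The Python sketch below uses a made-up example map #L(x, y) = (x + 2y,\, 3x)# on #\mathbb{R}^2#; the helper names `add` and `scale` are illustrative choices, not standard notation.

```python
# A minimal numerical sketch; the map L below is a made-up example.
# L maps the plane to itself by L(x, y) = (x + 2y, 3x).

def L(v):
    """A sample linear map from R^2 to R^2."""
    x, y = v
    return (x + 2 * y, 3 * x)

def add(v, w):
    """Vector addition in R^2."""
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    """Scalar multiplication in R^2."""
    return (a * v[0], a * v[1])

# Spot-check the sum rule and the scalar rule on sample vectors.
v, w, alpha = (1.0, -2.0), (0.5, 4.0), 3.0
assert L(add(v, w)) == add(L(v), L(w))           # sum rule
assert L(scale(alpha, v)) == scale(alpha, L(v))  # scalar rule
```

Of course, a finite number of spot-checks does not prove linearity; for that, the rules must be verified for arbitrary vectors and scalars.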
If #V# and #W# are arbitrary vector spaces, then the identity map #I_V# and the zero map #O:V\to W# are linear maps.
If #V = \mathbb{R}#, then multiplication by #7# (or any other number) is a linear map #V\to V#. Multiplication by #0# is the zero map and multiplication by #1# is the identity map.
An equivalent definition of linearity for #L# is: for all #\vec{x},\vec{y}\in V# and all numbers #\alpha#, #\beta#, we have
\[
L (\alpha \vec{x}+\beta\vec{y})=\alpha L\vec{x}+\beta L\vec{y}
\]
The term #\alpha L\vec{x}# can be interpreted in two ways by placing brackets:
- #\alpha(L\vec{x})#: the scalar product of the vector #L\vec{x}# by #\alpha#
- #(\alpha L)\vec{x}#: the image of the vector #\vec{x}# under the map #\alpha L# defined by #(\alpha L)\vec{x} =\alpha (L\vec{x})#
The second interpretation is defined in terms of the first, so the meaning of #\alpha L\vec{x}# does not depend on the placement of parentheses in this expression.
A bijective linear mapping is also called an isomorphism. We will later discuss the notion of bijectivity, which is defined for arbitrary maps. If such an isomorphism #L:V\rightarrow W# exists, then #V# and #W# are called isomorphic. Two isomorphic vector spaces are essentially the same. By this we mean that the names of the vectors and the like may differ, but that, after applying the bijective map (read: the renaming), the one vector space is identical to the other.
Later we will see that every finite-dimensional vector space of dimension #n# is isomorphic to a coordinate space. This means that, after an appropriate identification, the vector space can be viewed as #\mathbb{R}^n#.
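As a small illustration of this idea, the Python sketch below identifies the polynomials of degree less than #3# with #\mathbb{R}^3# via their coefficients. The representation of a polynomial by a coefficient dictionary, and the function names, are hypothetical choices made for this sketch.

```python
# Sketch: the space of polynomials of degree < 3 is isomorphic to R^3.
# A polynomial a0 + a1*x + a2*x^2 is identified with its coordinate
# vector (a0, a1, a2). Polynomials are represented here by dicts
# mapping each power to its nonzero coefficient.

def to_coords(p):
    """Coordinate vector in R^3 of a polynomial {power: coefficient}."""
    return tuple(p.get(k, 0) for k in range(3))

def from_coords(c):
    """Inverse map: rebuild the coefficient dict from a vector in R^3."""
    return {k: c[k] for k in range(3) if c[k] != 0}

p = {0: 2, 2: -1}            # the polynomial 2 - x^2
c = to_coords(p)             # its coordinate vector (2, 0, -1)
assert c == (2, 0, -1)
assert from_coords(c) == p   # the map is invertible: an isomorphism
```

The two functions undo each other, which is exactly the "change of name" mentioned above: every statement about such polynomials can be translated into a statement about vectors in #\mathbb{R}^3# and back.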
By repeated application of the definition we see that the image of a linear combination is the same linear combination of the image vectors:
A map #L:V\rightarrow W# is linear if and only if, for all natural numbers #n#, all vectors #\vec{x}_1,\ldots,\vec{x}_n# in #V# and all numbers #\alpha_1,\ldots ,\alpha_n#,
\[
L \left(\,\sum_{i=1}^n\alpha_i\vec{x}_i\,\right)=\sum_{i=1}^n\alpha_iL\vec{x}_i
\]
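This statement, too, can be spot-checked numerically. The Python sketch below again uses the made-up linear map #L(x, y) = (x + 2y,\, 3x)# on #\mathbb{R}^2# and a linear combination of four vectors.

```python
# Numerical spot-check: the image of a linear combination equals the
# same linear combination of the image vectors. L is a made-up example.

def L(v):
    """A sample linear map from R^2 to R^2: L(x, y) = (x + 2y, 3x)."""
    x, y = v
    return (x + 2 * y, 3 * x)

vecs   = [(1, 0), (0, 1), (2, -1), (-3, 5)]
alphas = [2, -1, 0.5, 4]

# Left-hand side: apply L to the linear combination.
comb = (sum(a * v[0] for a, v in zip(alphas, vecs)),
        sum(a * v[1] for a, v in zip(alphas, vecs)))
lhs = L(comb)

# Right-hand side: the same combination of the image vectors.
images = [L(v) for v in vecs]
rhs = (sum(a * w[0] for a, w in zip(alphas, images)),
       sum(a * w[1] for a, w in zip(alphas, images)))

assert lhs == rhs
```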
If #n=1#, then the equality says that #L (\alpha_1\vec{x}_1)=\alpha_1L\vec{x}_1#. This is the scalar rule from the definition of a linear map.
If #n=2#, then the equality says \[L (\alpha_1\vec{x}_1+\alpha_2\vec{x}_2)=\alpha_1L\vec{x}_1+\alpha_2L\vec{x}_2\]
After renaming the variables, this is the equivalent definition of linearity given above.
So, if the equality in the statement is true for all natural numbers #n#, then #L# is linear.
Suppose that #L# is a linear map. In that case, as we saw above, the equality holds for #n=1# and #n=2#. To complete the proof, we show by induction on #n# that the equality holds for all integers #n\ge 1#. For this purpose, let #n\gt 2# and assume that the equality holds for #n-1# vectors. Now
\[\begin{array}{rcl} L \left(\,\sum_{i=1}^n\alpha_i\vec{x}_i\,\right)&=& L \left(\sum_{i=1}^{n-1}\alpha_i\vec{x}_i+\alpha_n\vec{x}_n\right)\\&&\phantom{xx}\color{blue}{\text{terms rearranged}}\\ &=& L \left(\,\sum_{i=1}^{n-1}\alpha_i\vec{x}_i\right)+L\left(\alpha_n\vec{x}_n\right)\\ &&\phantom{xx}\color{blue}{\text{sum rule}}\\&=&\sum_{i=1}^{n-1}\alpha_iL\vec{x}_i+\alpha_nL\vec{x}_n\\ &&\phantom{xx}\color{blue}{\text{induction hypothesis and scalar rule}}\\ &=& \sum_{i=1}^{n}\alpha_iL\vec{x}_i\\ &&\phantom{xx}\color{blue}{\text{terms rearranged}}\end{array}\]
Linear maps occur very frequently in practice, even though they are not always immediately recognized as such. The following examples illustrate this.
Differentiation is a linear map.
The sum rule and scalar rule are basic properties of the derivative, which we denote by #D(f)# for a differentiable function #f#:
\[
\begin{array}{rcl}
D(f+g) & =&D(f)+D(g) \\
D(\alpha\cdot f) & =&\alpha\cdot D(f)
\end{array}
\]
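These two properties can be made concrete by restricting to polynomials and representing each polynomial by its list of coefficients, so that #D# becomes a map on such lists. The Python sketch below uses this (hypothetical) representation; the helper names are illustrative.

```python
# Sketch: differentiation of polynomials, represented by coefficient
# lists [a0, a1, a2, ...], is itself a linear map on those lists.

def D(p):
    """Derivative of a0 + a1*x + a2*x^2 + ... as a coefficient list."""
    return [k * p[k] for k in range(1, len(p))]

def add(p, q):
    """Sum of two polynomials, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(alpha, p):
    """Scalar multiple of a polynomial."""
    return [alpha * a for a in p]

f = [1, 0, 3]      # 1 + 3x^2
g = [0, 2, 0, 5]   # 2x + 5x^3

assert D(add(f, g)) == add(D(f), D(g))   # sum rule
assert D(scale(4, f)) == scale(4, D(f))  # scalar rule
```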