5.2: Linear Independence

We are now going to define the notion of linear independence of a list of vectors. This concept will be extremely important in the sections that follow, and especially when we introduce bases and the dimension of a vector space.

Definition 5.2.1: Linearly Independent Vectors A list of vectors \((v_1,\ldots,v_m)\) is called linearly independent if the only solution for \(a_1,\ldots,a_m\in\mathbb{F}\) to the equation \[ a_1 v_1 + \cdots + a_m v_m = 0 \] is \(a_1=\cdots = a_m=0\). In other words, the zero vector can only trivially be written as a linear combination of \((v_1,\ldots,v_m)\).

Definition 5.2.2: Linearly Dependent Vectors A list of vectors \((v_1,\ldots,v_m)\) is called linearly dependent if it is not linearly independent. That is, \((v_1,\ldots,v_m)\) is linearly dependent if there exist \(a_1,\ldots,a_m\in \mathbb{F}\), not all zero, such that \[ a_1 v_1 + \cdots + a_m v_m = 0.\]
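For example, any list containing the zero vector is automatically linearly dependent: if, say, \(v_1 = 0\), then \[ 1\cdot v_1 + 0\cdot v_2 + \cdots + 0\cdot v_m = 0 \] is a nontrivial solution.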

Example \(\PageIndex{1}\): The vectors \((e_1,\ldots,e_m)\) of Example 5.1.4 are linearly independent. To see this, note that the only solution to the vector equation \[ 0 = a_1 e_1 + \cdots + a_m e_m = (a_1,\ldots,a_m) \] is \(a_1=\cdots=a_m=0\). Alternatively, we can reinterpret this vector equation as the homogeneous linear system \[ \left. \begin{array}{cccccc} a_1 & & & & = & 0 \\ & a_2 & & & = & 0 \\ & & \ddots & & \vdots & \vdots \\ & & & a_m & = & 0 \end{array} \right\}, \] which clearly has only the trivial solution. (See Section A.3.2 for the appropriate definitions.)

Example \(\PageIndex{2}\): The vectors \(v_1=(1,1,1)\), \(v_2=(0,1,-1)\), and \(v_3=(1,2,0)\) are linearly dependent. To see this, we need to consider the vector equation \[ \begin{aligned} a_1 v_1 + a_2 v_2 + a_3 v_3 &= a_1 (1,1,1) + a_2 (0,1,-1) + a_3 (1,2,0)\\ &= (a_1+a_3,\, a_1+a_2+2a_3,\, a_1-a_2) = (0,0,0). \end{aligned} \] Solving for \(a_1,a_2\), and \(a_3\), we see, for example, that \((a_1,a_2,a_3) = (1,1,-1)\) is a nonzero solution. Alternatively, we can reinterpret this vector equation as the homogeneous linear system \[ \left. \begin{array}{ccccccc} a_1 & & & + & a_3 & = & 0 \\ a_1 & + & a_2 & + & 2a_3 & = & 0 \\ a_1 & - & a_2 & & & = & 0 \end{array} \right\}. \] Using the techniques of Section A.3, we see that solving this linear system is equivalent to solving the following linear system: \[ \left. \begin{array}{ccccccc} a_1 & & & + & a_3 & = & 0 \\ & & a_2 & + & a_3 & = & 0 \end{array} \right\}. \] Note that this new linear system clearly has infinitely many solutions. In particular, the set of all solutions is given by \[ N = \{ (a_1,a_2,a_3) \in \mathbb{F}^3 \ | \ a_1 = a_2 = -a_3 \} = \Span((1,1,-1)). \]
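Indeed, one can verify the claimed solution directly: \[ 1\cdot(1,1,1) + 1\cdot(0,1,-1) - 1\cdot(1,2,0) = (1+0-1,\, 1+1-2,\, 1-1-0) = (0,0,0). \]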

Example \(\PageIndex{3}\): The vectors \((1,z,\ldots,z^m)\) in the vector space \(\mathbb{F}_m[z]\) are linearly independent. Requiring that \[ a_0 1 + a_1 z +\cdots+ a_m z^m = 0 \] means that the polynomial on the left should be zero for all \(z\in \mathbb{F}\). This is only possible for \(a_0=a_1=\cdots=a_m=0\), since a nonzero polynomial of degree at most \(m\) has at most \(m\) roots and hence cannot vanish for every \(z\in\mathbb{F}\).

An important consequence of the notion of linear independence is the fact that any vector in the span of a given list of linearly independent vectors can be uniquely written as a linear combination.
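To make this concrete in the case \(m=2\), suppose that \(a_0 + a_1 z + a_2 z^2 = 0\) for all \(z\in\mathbb{F}\). Setting \(z=0\) gives \(a_0 = 0\); setting \(z=1\) and \(z=-1\) then gives \[ a_1 + a_2 = 0 \quad \text{and} \quad -a_1 + a_2 = 0, \] and adding and subtracting these two equations yields \(a_2 = 0\) and \(a_1 = 0\).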

Lemma 5.2.6 The list of vectors \((v_1,\ldots,v_m)\) is linearly independent if and only if every \(v\in\Span(v_1,\ldots,v_m)\) can be uniquely written as a linear combination of \((v_1,\ldots,v_m)\).

Proof \(( "\Longrightarrow" )\) Assume that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors. Suppose there are two ways of writing \(v\in\Span(v_1,\ldots,v_m)\) as a linear combination of the \(v_i\): \[ \begin{aligned} v &= a_1v_1 + \cdots + a_m v_m,\\ v &= a_1'v_1 + \cdots + a_m' v_m. \end{aligned} \] Subtracting the two equations yields \(0=(a_1-a_1')v_1 + \cdots + (a_m-a_m')v_m\). Since \((v_1,\ldots,v_m)\) is linearly independent, the only solution to this equation is \(a_1-a_1'=0,\ldots,a_m-a_m'=0\), or equivalently \(a_1=a_1',\ldots,a_m=a_m'\). \(( "\Longleftarrow" )\) Now assume that, for every \(v \in \Span(v_1,\ldots,v_m)\), there are unique \(a_1,\ldots,a_m \in \mathbb{F}\) such that \[ v = a_1 v_1 + \cdots + a_m v_m. \] This implies, in particular, that the only way the zero vector \(v=0\) can be written as a linear combination of \(v_1,\ldots,v_m\) is with \(a_1=\cdots=a_m=0\). This shows that \((v_1,\ldots,v_m)\) is linearly independent. \(\square\)
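For contrast, consider the linearly dependent list \((v_1,v_2,v_3)\) of Example \(\PageIndex{2}\). There the vector \(v_3=(1,2,0)\) admits more than one representation, for example \[ v_3 = 1\cdot v_1 + 1\cdot v_2 + 0\cdot v_3 = 0\cdot v_1 + 0\cdot v_2 + 1\cdot v_3, \] so uniqueness fails exactly as the lemma predicts.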

It is clear that if \((v_1,\ldots,v_m)\) is a list of linearly independent vectors, then the list \((v_1,\ldots,v_{m-1})\) is also linearly independent. For the next lemma, we introduce the following notation: If we want to drop a vector \(v_j\) from a given list \((v_1,\ldots,v_m)\) of vectors, then we indicate the dropped vector by a hat. I.e., we write \[ (v_1,\ldots,\hat{v}_j,\ldots,v_m) = (v_1,\ldots,v_{j-1}, v_{j+1},\ldots,v_m). \]

Lemma 5.2.7: Linear Dependence Lemma If \((v_1,\ldots,v_m)\) is linearly dependent and \(v_1\neq 0\), then there exists an index \(j\in\{2,\ldots,m\}\) such that the following two conditions hold.

  1. \(v_j\in \Span(v_1,\ldots, v_{j-1})\).
  2. If \(v_j\) is removed from \((v_1,\ldots,v_m)\), then \(\Span(v_1,\ldots,\hat{v}_j,\ldots,v_m) = \Span(v_1,\ldots,v_m)\).

Proof Since \((v_1,\ldots,v_m)\) is linearly dependent, there exist \(a_1,\ldots,a_m\in \mathbb{F}\), not all zero, such that \(a_1v_1+\cdots+ a_mv_m =0\). Since by assumption \(v_1\neq 0\), not all of \(a_2,\ldots,a_m\) can be zero (otherwise \(a_1 v_1 = 0\) with \(a_1\neq 0\) would force \(v_1=0\)). Let \(j\in\{2,\ldots,m\}\) be the largest index such that \(a_j\neq 0\). Then we have

\[ v_j = -\frac{a_1}{a_j} v_1 - \cdots - \frac{a_{j-1}}{a_j} v_{j-1}, \tag{5.2.1} \]

which implies Part 1.

To prove Part 2, let \(v\in\Span(v_1,\ldots,v_m)\). This means, by definition, that there exist scalars \(b_1,\ldots,b_m\in\mathbb{F}\) such that

\[ v = b_1 v_1 + \cdots + b_m v_m. \]

The vector \(v_j\) that we determined in Part 1 can be replaced by Equation (5.2.1) so that \(v\) is written as a linear combination of \((v_1,\ldots,\hat{v}_j,\ldots,v_m)\). Hence, \(\Span(v_1,\ldots,\hat{v}_j,\ldots,v_m) = \Span(v_1,\ldots,v_m)\). \(\square\)

Example \(\PageIndex{4}\): The list \((v_1,v_2,v_3)=((1,1),(1,2),(1,0))\) of vectors spans \(\mathbb{R}^2\).

To see this, take any vector \(v=(x,y)\in \mathbb{R}^2\). We want to show that \(v\) can be written as a linear combination of \((1,1),(1,2),(1,0)\), i.e., that there exist scalars \(a_1, a_2, a_3 \in \mathbb{R}\) such that

\[ v = a_1 (1,1) + a_2 (1,2) + a_3 (1,0),\]

or equivalently that

\[ (x,y) = (a_1+a_2+a_3,\, a_1+2a_2). \]
Clearly \(a_1=y\), \(a_2=0\), and \(a_3=x-y\) form a solution for any choice of \(x, y\in \mathbb{R}\), and so \(\mathbb{R}^2=\Span((1,1),(1,2),(1,0))\). However, note that

\[ 2(1,1) - (1,2) - (1,0) = (0,0), \tag{5.2.2} \]

which shows that the list \(((1,1),(1,2),(1,0))\) is linearly dependent.

The Linear Dependence Lemma 5.2.7 thus states that one of the vectors can be dropped from \(((1,1),(1,2),(1,0))\) and that the resulting list of vectors will still span \(\mathbb{R}^2\). Indeed, by Equation (5.2.2),

\[ v_3 = (1,0) = 2(1,1)-(1,2) = 2v_1-v_2, \]

and so \(\Span((1,1),(1,2),(1,0)) = \Span((1,1),(1,2))\).

The next result shows that linearly independent lists of vectors that span a finite-dimensional vector space are the smallest possible spanning sets.

Theorem 5.2.9: Let \(V\) be a finite-dimensional vector space. Suppose that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors in \(V\), and let \((w_1,\ldots,w_n)\) be any list that spans \(V\). Then \(m\le n\).
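For instance, since \((e_1,e_2,e_3)\) is a linearly independent list in \(\mathbb{F}^3\) by Example \(\PageIndex{1}\), the theorem implies that every list spanning \(\mathbb{F}^3\) must have length at least \(3\); in particular, no list of two vectors can span \(\mathbb{F}^3\).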

Proof The proof uses the following iterative procedure: start with an arbitrary list of vectors \(\mathcal{S}_0=(w_1,\ldots,w_n)\) such that \(V = \Span(\mathcal{S}_0)\). At the \(k^{\text{th}}\) step of the procedure, we construct a new list \(\mathcal{S}_k\) by replacing some vector \(w_{j_k}\) by the vector \(v_k\) such that \(\mathcal{S}_k\) still spans \(V\). Repeating this for all \(v_k\) then produces a new list \(\mathcal{S}_m\) of length \(n\) that contains each of \(v_1,\ldots,v_m\), which then proves that \(m\le n\). Let us now discuss each step in this procedure in detail.

Step 1. Since \((w_1,\ldots,w_n)\) spans \(V\), adding a new vector to the list makes the new list linearly dependent. Hence \((v_1,w_1,\ldots,w_n)\) is linearly dependent. Note also that \(v_1\neq 0\) since \((v_1,\ldots,v_m)\) is linearly independent, and so Lemma 5.2.7 applies. Hence there exists an index \(j_1\) such that

\[ w_{j_1} \in \Span(v_1,w_1,\ldots,w_{j_1-1}). \]

Hence \(\mathcal{S}_1=(v_1,w_1,\ldots,\hat{w}_{j_1},\ldots,w_n)\) spans \(V\). In this step, we added the vector \(v_1\) and removed the vector \(w_{j_1}\) from \(\mathcal{S}_0\).

Step \(k\). Suppose that we already added \(v_1,\ldots,v_{k-1}\) to our spanning list and removed the vectors \(w_{j_1},\ldots,w_{j_{k-1}}\). It is impossible that we have reached the situation where all of the vectors \(w_1,\ldots, w_n\) have been removed from the spanning list at this step if \(k\leq m\), because then we would have \(V=\Span(v_1,\ldots, v_{k-1})\), which would allow \(v_k\) to be expressed as a linear combination of \(v_1,\ldots,v_{k-1}\) (in contradiction with the assumed linear independence of \((v_1,\ldots,v_m)\)).

Now, call the list reached at this step \(\mathcal{S}_{k-1}\), and note that \(V = \Span(\mathcal{S}_{k-1})\). Add the vector \(v_k\) to \(\mathcal{S}_{k-1}\). By the same arguments as before, adjoining the extra vector \(v_k\) to the spanning list \(\mathcal{S}_{k-1}\) yields a list of linearly dependent vectors. Hence, by Lemma 5.2.7, there exists an index \(j_k\) such that \(\mathcal{S}_{k-1}\) with \(v_k\) added and \(w_{j_k}\) removed still spans \(V\). The fact that \((v_1,\ldots,v_k)\) is linearly independent ensures that the vector removed is indeed among the \(w_j\). Call the new list \(\mathcal{S}_k\), and note that \(V = \Span(\mathcal{S}_k)\).

The final list \(\mathcal{S}_m\) is \(\mathcal{S}_0\) but with each \(v_1,\ldots,v_m\) added and each \(w_{j_1},\ldots, w_{j_m}\) removed. Moreover, note that \(\mathcal{S}_m\) has length \(n\) and still spans \(V\). It follows that \(m\le n\). \(\square\)
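To illustrate the procedure, take \(V=\mathbb{R}^2\), the linearly independent list \((v_1,v_2)=((1,0),(0,1))\), and the spanning list \((w_1,w_2,w_3)=((1,1),(1,2),(1,0))\) of Example \(\PageIndex{4}\). In Step 1, the list \((v_1,w_1,w_2,w_3)\) is linearly dependent, and since \[ w_2 = (1,2) = -(1,0) + 2(1,1) \in \Span(v_1,w_1), \] we may remove \(w_2\) to obtain \(\mathcal{S}_1=((1,0),(1,1),(1,0))\). In Step 2, adjoining \(v_2\) gives the linearly dependent list \(((1,0),(0,1),(1,1),(1,0))\), and since \(w_1=(1,1)=v_1+v_2\in\Span(v_1,v_2)\), we may remove \(w_1\) to obtain \(\mathcal{S}_2=((1,0),(0,1),(1,0))\), which still spans \(\mathbb{R}^2\) and contains both \(v_1\) and \(v_2\). As the theorem asserts, \(m=2\le 3=n\).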

Contributors

Both hardbound and softbound versions of this textbook are available online at WorldScientific.com.

This page titled 5.2: Linear Independence is shared under a not declared license and was authored, remixed, and/or curated by Isaiah Lankham, Bruno Nachtergaele, & Anne Schilling.