Proof.
If \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly dependent, then there exists \(\alpha \neq 0\) such that
\begin{equation}
{\mathbf v}_1 = \alpha {\mathbf v}_2.\tag{3.2.3}
\end{equation}
Multiplying both sides of this equation by \(A\text{,}\) we have
\begin{equation}
\lambda_1 {\mathbf v}_1 = A{\mathbf v}_1 = \alpha A {\mathbf v}_2 = \alpha \lambda_2 {\mathbf v}_2.\tag{3.2.4}
\end{equation}
On the other hand, we obtain
\begin{equation}
\lambda_2 {\mathbf v}_1 = \alpha \lambda_2 {\mathbf v}_2\tag{3.2.5}
\end{equation}
if we multiply both sides of
(3.2.3) by
\(\lambda_2\text{.}\) Subtracting
(3.2.5) from
(3.2.4), we can conclude that
\begin{equation*}
(\lambda_1 - \lambda_2) {\mathbf v}_1 = \alpha(\lambda_2 - \lambda_2 ){\mathbf v}_2 = 0 {\mathbf v}_2 = {\mathbf 0}.
\end{equation*}
Since \({\mathbf v}_1\) is an eigenvector, it cannot be the zero vector, so we must have \(\lambda_1 = \lambda_2\text{.}\) This contradicts the assumption that \(\lambda_1\) and \(\lambda_2\) are distinct; therefore, \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.
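Although it is not part of the proof, we can illustrate the lemma numerically. In the following sketch, the matrix \(A\text{,}\) the use of NumPy, and the determinant test for linear independence are all choices made only for this example.
\begin{verbatim}
import numpy as np

# An illustrative matrix with distinct real eigenvalues (chosen only for
# this example).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# The columns of V are eigenvectors v1 and v2; lam holds the eigenvalues.
lam, V = np.linalg.eig(A)
print("eigenvalues:", lam)                 # 4.0 and 2.0, which are distinct

# v1 and v2 are linearly independent exactly when det([v1 v2]) is nonzero.
print("det([v1 v2]) =", np.linalg.det(V))  # nonzero, so {v1, v2} is a basis
\end{verbatim}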
We can now proceed to the proof of the theorem. Suppose that we have a linear system \({\mathbf x}' = A{\mathbf x}\) such that \(A\) has a pair of distinct real eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) with associated eigenvectors \({\mathbf v}_1\) and \({\mathbf v}_2\text{.}\) By the Principle of Superposition, we know that
\begin{equation*}
{\mathbf x}(t) = c_1 e^{\lambda_1 t} {\mathbf v}_1 + c_2 e^{\lambda_2 t} {\mathbf v}_2
\end{equation*}
is a solution to the linear system
\({\mathbf x}' = A {\mathbf x}\text{.}\) To show that this is the general solution, we must show that we can choose
\(c_1\) and
\(c_2\) to satisfy a given initial condition
\({\mathbf x}_0 = {\mathbf x}(0) = (x_0, y_0)\text{.}\) By
Lemma 3.2.7, we know that
\({\mathbf v}_1\) and
\({\mathbf v}_2\) form a basis for
\({\mathbb R}^2\text{.}\) That is, we can write
\({\mathbf x}_0\) as a linear combination of
\({\mathbf v}_1\) and
\({\mathbf v}_2\text{.}\) In other words, we can find
\(c_1\) and
\(c_2\) such that
\begin{equation*}
{\mathbf x}_0 = {\mathbf x}(0) = c_1 {\mathbf v}_1 + c_2 {\mathbf v}_2.
\end{equation*}
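Finding \(c_1\) and \(c_2\) in practice amounts to solving a \(2 \times 2\) linear system whose coefficient matrix has \({\mathbf v}_1\) and \({\mathbf v}_2\) as its columns. The following is a minimal sketch, continuing the illustrative matrix above and assuming the initial condition \({\mathbf x}_0 = (1, 0)\text{.}\)
\begin{verbatim}
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam, V = np.linalg.eig(A)        # columns of V are the eigenvectors v1 and v2

x0 = np.array([1.0, 0.0])        # an initial condition chosen for illustration

# x0 = c1*v1 + c2*v2 is the linear system V c = x0, which has a unique
# solution because v1 and v2 form a basis for R^2.
c = np.linalg.solve(V, x0)
print("c1, c2 =", c)
\end{verbatim}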
It remains to show that \({\mathbf x}(t) = c_1 e^{\lambda_1 t} {\mathbf v}_1 + c_2 e^{\lambda_2 t} {\mathbf v}_2\) is the unique solution to the system
\begin{align*}
{\mathbf x}'(t) & = A {\mathbf x}(t),\\
{\mathbf x}(0) & = {\mathbf x}_0.
\end{align*}
Suppose that there is another solution \({\mathbf y}(t)\) such that \({\mathbf y}(0) = {\mathbf x}_0\text{.}\) Since \({\mathbf v}_1\) and \({\mathbf v}_2\) form a basis for \({\mathbb R}^2\text{,}\) we can write
\begin{equation*}
{\mathbf y}(t) = f(t) {\mathbf v}_1 + g(t) {\mathbf v}_2,
\end{equation*}
where
\begin{align*}
f(0) & = c_1,\\
g(0) & = c_2.
\end{align*}
Since \({\mathbf y}(t)\) is a solution to our system of equations, we know that
\begin{equation*}
A {\mathbf y}(t) = {\mathbf y}'(t) = f'(t) {\mathbf v}_1 + g'(t) {\mathbf v}_2.
\end{equation*}
On the other hand,
\begin{equation*}
A {\mathbf y}(t) = f(t) A {\mathbf v}_1 + g(t) A {\mathbf v}_2 = \lambda_1 f(t) {\mathbf v}_1 + \lambda_2 g(t) {\mathbf v}_2.
\end{equation*}
Consequently, since \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent, we can equate coefficients in these two expressions to obtain two first-order initial value problems,
\begin{align*}
f'(t) & = \lambda_1 f(t),\\
f(0) & = c_1,
\end{align*}
and
\begin{align*}
g'(t) & = \lambda_2 g(t),\\
g(0) & = c_2.
\end{align*}
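Each of these scalar initial value problems can also be checked symbolically. The sketch below uses SymPy's dsolve for the first problem; the symbol names are assumptions made only for this illustration.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
lam1, c1 = sp.symbols('lambda_1 c_1')
f = sp.Function('f')

# Solve f'(t) = lambda_1 f(t) subject to f(0) = c_1.
sol = sp.dsolve(sp.Eq(f(t).diff(t), lam1 * f(t)), f(t), ics={f(0): c1})
print(sol)   # Eq(f(t), c_1*exp(lambda_1*t))
\end{verbatim}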
The solutions of these initial value problems are
\begin{align*}
f(t) & = c_1 e^{\lambda_1 t},\\
g(t) & = c_2 e^{\lambda_2 t},
\end{align*}
respectively. Thus, \({\mathbf y}(t) = {\mathbf x}(t)\text{,}\) and the proof of our theorem is complete.
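As a final, non-essential sanity check, the closed-form solution can be compared against a direct numerical integration of \({\mathbf x}' = A{\mathbf x}\text{.}\) The sketch below assumes the same illustrative matrix and initial condition as above and uses SciPy's solve_ivp.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam, V = np.linalg.eig(A)
x0 = np.array([1.0, 0.0])
c = np.linalg.solve(V, x0)       # coefficients with x(0) = x0

# Closed-form solution x(t) = c1 e^{lambda_1 t} v1 + c2 e^{lambda_2 t} v2.
def x(t):
    return c[0] * np.exp(lam[0] * t) * V[:, 0] + c[1] * np.exp(lam[1] * t) * V[:, 1]

# Direct numerical integration of x' = Ax with the same initial condition.
sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)

# The two agree up to integration error, as the uniqueness argument predicts.
print(np.max(np.abs(sol.y[:, -1] - x(sol.t[-1]))))
\end{verbatim}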