<div class="d-none"><math>
\newcommand{\ex}[1]{\item }
\newcommand{\sx}{\item}
\newcommand{\x}{\sx}
\newcommand{\sxlab}[1]{}
\newcommand{\xlab}{\sxlab}
\newcommand{\prov}[1] {\quad #1}
\newcommand{\provx}[1] {\quad \mbox{#1}}
\newcommand{\intext}[1]{\quad \mbox{#1} \quad}
\newcommand{\R}{\mathrm{\bf R}}
\newcommand{\Q}{\mathrm{\bf Q}}
\newcommand{\Z}{\mathrm{\bf Z}}
\newcommand{\C}{\mathrm{\bf C}}
\newcommand{\dt}{\textbf}
\newcommand{\goesto}{\rightarrow}
\newcommand{\ddxof}[1]{\frac{d #1}{d x}}
\newcommand{\ddx}{\frac{d}{dx}}
\newcommand{\ddt}{\frac{d}{dt}}
\newcommand{\dydx}{\ddxof y}
\newcommand{\nxder}[3]{\frac{d^{#1}{#2}}{d{#3}^{#1}}}
\newcommand{\deriv}[2]{\frac{d^{#1}{#2}}{dx^{#1}}}
\newcommand{\dist}{\mathrm{distance}}
\newcommand{\arccot}{\mathrm{arccot\:}}
\newcommand{\arccsc}{\mathrm{arccsc\:}}
\newcommand{\arcsec}{\mathrm{arcsec\:}}
\newcommand{\arctanh}{\mathrm{arctanh\:}}
\newcommand{\arcsinh}{\mathrm{arcsinh\:}}
\newcommand{\arccosh}{\mathrm{arccosh\:}}
\newcommand{\sech}{\mathrm{sech\:}}
\newcommand{\csch}{\mathrm{csch\:}}
\newcommand{\conj}[1]{\overline{#1}}
\newcommand{\mathds}{\mathbb}
</math></div>


In this section we shall show how to obtain the general solution of any
differential equation of the form
<span id{{=}}"eq6.8.1"/>
<math display="block">
\begin{equation}
\frac{d^{2}y}{dx^2} + a \frac{dy}{dx}  + by = 0,
\label{eq6.8.1}
\end{equation}
</math>
where <math>a</math> and <math>b</math> are real constants. Differential equations of this type occur
frequently in mechanics and also in the theory of electric circuits. Equation (1) is a '''second-order
differential equation''', since it contains the second derivative <math>\frac{d^{2}y}{dx^2}</math> but no
higher derivative.  It is called '''linear''' because each one of <math>y, \frac{dy}{dx}</math>, and <math>\frac{d^{2}y}{dx^2}</math> occurs, if at all, to the first power. That is, if we set <math>\frac{dy}{dx} = z</math>
and <math>\frac{d^{2}y}{dx^2} = w</math>, then (1) becomes <math>w + az + by = 0</math>, and the left side is a
linear polynomial, or polynomial of first degree, in <math>w, z</math>, and <math>y</math>. A second-order linear
differential equation more general than (1) is
<math display="block">
\frac{d^{2}y}{dx^2} + f(x) \frac{dy}{dx} + g(x)y  = h(x),
</math>
where <math>f</math>, <math>g</math>, and <math>h</math> are given functions of <math>x</math>. Equation (1) is a special case,
called '''homogeneous''', because <math>h</math> is the zero function, and is said to have '''constant coefficients''', since <math>f</math> and <math>g</math> are constant functions. Thus the topic of this section is the study of second-order, linear, homogeneous differential equations with constant coefficients.
An important and easily proved property of differential equations of this kind is the following:
{{proofcard|Theorem|theorem-1|If <math>y_1</math> and <math>y_2</math> are any two solutions of the differential equation (1), and if <math>c_1</math>
and <math>c_2</math> are any two real numbers, then <math>c_{1}y_{1} + c_{2}y_{2}</math> is also a solution.
|The proof uses only the elementary properties of the derivative. We know that
<math display="block">
\frac{d}{dx}(c_{1}y_{1} + c_{2}y_{2}) = c_{1} \frac{dy_1}{dx} + c_{2} \frac{dy_2}{dx}.
</math>
Hence
<math display="block">
\begin{eqnarray*}
\frac{d^2}{{dx}^2} (c_{1}y_{1} + c_{2}y_{2}) &=& \frac{d}{dx}\Bigl(c_{1} \frac{dy_1}{dx} + c_{2} \frac{dy_2}{dx}\Bigr) \\
&=& c_{1} \frac{d^{2}y_1}{{dx}^2} + c_{2} \frac{d^{2}y_2}{{dx}^2}.
\end{eqnarray*}
</math>
To test whether or not <math>c_{1}y_{1} + c_{2}y_{2}</math> is a solution,
we substitute it for <math>y</math> in the differential equation:
<math display="block">
\begin{eqnarray*}
&& \frac{d^{2}}{dx^2} (c_{1}y_{1} + c_{2}y_{2}) + a \frac{d}{dx} (c_{1}y_{1} + c_{2}y_{2}) + b(c_{1}y_{1} + c_{2}y_{2})\\
&=& c_{1} \frac{d^{2}y_1}{dx^2} + c_{2} \frac{d^{2}y_2}{dx^2} + ac_{1} \frac{dy_1}{dx} + ac_{2} \frac{dy_2}{ dx} + bc_{1}y_{1} + bc_{2}y_{2}\\
&=& c_{1}\Bigl(\frac{d^{2}y_1}{dx^2} + a \frac{dy_1}{dx} + by_1\Bigr)
    + c_{2}\Bigl(\frac{d^{2}y_2}{dx^2} + a \frac{dy_2}{dx} + by_2\Bigr).
\end{eqnarray*}
</math>
The expressions in parentheses in the last line are both zero because
<math>y_1</math> and <math>y_2</math> are by assumption solutions of the differential equation.
Hence the top line is also zero, and so <math>c_{1}y_{1} + c_{2}y_{2}</math> is a solution.
This completes the proof.}}
It follows in particular that the sum and difference of any two solutions of (1) is a solution,
and also that any constant multiple of a solution is again a solution. Finally, note that the
constant function 0 is a solution of (1) for any constants <math>a</math> and <math>b</math>.
In Section 5 of Chapter 5 we found that the general solution of the differential equation <math>\frac{dy}{dx} + ky = 0</math> is the function <math>y = ce^{-kx}</math>, where <math>c</math> is an arbitrary real number.
This differential equation is first-order, linear, homogeneous, and with constant coefficients.
Let us see whether by any chance an
exponential function might also be a solution of the second-order differential equation
<math>\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0</math>. Let <math>y = e^{rx}</math>, where <math>r</math> is any real number.
Then
<math display="block">
\frac{dy}{dx} = re^{rx}, \;\;\; \frac{d^{2}y}{dx^2} = r^{2} e^{rx} .
</math>
Hence
<math display="block">
\begin{eqnarray*}
\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by &=& r^{2}e^{rx} + are^{rx} + be^{rx} \\
&=& (r^2 + ar + b)e^{rx}.
\end{eqnarray*}
</math>
Since <math>e^{rx}</math> is never zero, the right side is zero if and only if <math>r^2 + ar + b = 0</math>.
That is, we have shown that
{{proofcard|Theorem|theorem-2|The function <math>e^{rx}</math> is a solution of <math>\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0</math>
if and only if the real number <math>r</math> is a solution of <math>t^2 + at + b = 0</math>.|}}
The latter equation is called the '''characteristic equation''' of the differential equation.
'''Example'''
Consider the differential equation <math>\frac{d^{2}y}{dx^2} - \frac{dy}{dx} - 6y = 0</math>.
Its characteristic equation is <math>t^2 - t - 6 = 0</math>.
Since <math>t^2 - t - 6 = (t - 3)(t + 2)</math>, the two solutions, or roots, are 3 and -2. Hence, by (8.2),
both functions <math>e^{3x}</math> and <math>e^{-2x}</math> are solutions of the differential equation. 
It follows by (8.1) that the function
<math display="block">
y = c_{1} e^{3x} + c_{2}e^{-2x}
</math>
is a solution for any two real numbers <math>c_1</math> and <math>c_2</math>.
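The substitution in this example can be checked numerically. The sketch below (an illustrative helper of my own, not part of the text) uses the facts <math>y' = re^{rx}</math> and <math>y'' = r^{2}e^{rx}</math>, so the residual of the equation is <math>(r^2 - r - 6)e^{rx}</math>, which vanishes exactly when <math>r</math> is a root of the characteristic polynomial:

```python
import math

def residual(r, x):
    """Value of y'' - y' - 6y at x for y = e^{r x}; equals (r^2 - r - 6) e^{r x}."""
    y = math.exp(r * x)
    return (r * r) * y - r * y - 6 * y   # y'' = r^2 y,  y' = r y

# r = 3 and r = -2 are the roots of t^2 - t - 6, so the residual vanishes.
for r in (3.0, -2.0):
    for x in (-1.0, 0.0, 0.5, 2.0):
        assert abs(residual(r, x)) < 1e-9 * max(1.0, math.exp(abs(r * x)))

# A non-root such as r = 1 is not a solution: the residual at x = 0 is -6.
assert abs(residual(1.0, 0.0) + 6.0) < 1e-12
```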
The form of the general solution of the differential equation
<math display="block">
\frac{d^{2}y}{dx^{2}} + a \frac{dy}{dx} + by = 0
</math>
depends on the roots of the characteristic equation <math>t^2 + at + b = 0</math>.
There are three different cases to be considered.
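The three cases are distinguished by the sign of the discriminant <math>a^2 - 4b</math> of the characteristic equation. As a computational sketch (the function name and the case labels are my own), one can compute the roots and classify the case directly:

```python
import cmath

def characteristic_roots(a, b):
    """Roots of the characteristic equation t^2 + a t + b = 0 of
    y'' + a y' + b y = 0, together with which case applies."""
    disc = a * a - 4 * b                   # discriminant a^2 - 4b
    r1 = (-a + cmath.sqrt(disc)) / 2       # cmath handles disc < 0 as well
    r2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:
        case = "distinct real roots"       # Case 1
    elif disc < 0:
        case = "complex conjugate roots"   # Case 2
    else:
        case = "one repeated real root"    # Case 3
    return r1, r2, case

# y'' - y' - 6y = 0: t^2 - t - 6 = 0 has roots 3 and -2 (Case 1).
r1, r2, case = characteristic_roots(-1, -6)
```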
''Case 1.''  The characteristic equation has distinct real roots. This is the simplest case.
We have
<math display="block">
t^2 + at + b = (t - r_1)(t - r_2),
</math>
where <math>r_1</math> and <math>r_2</math> are real numbers and <math>r_{1} \neq r_2</math>. Both functions <math>e^{r_{1}x}</math> and <math>e^{r_{2}x}</math> are solutions of the differential equation, and so is any linear combination <math>c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}</math>. Moreover it can be shown, although we
defer the proof until Chapter 11, that if <math>y</math> is any solution of the differential equation, then
<span id{{=}}"eq6.8.2"/>
<math display="block">
\begin{equation}
y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} ,
\label{eq6.8.2}
\end{equation}
</math>
for some two real numbers <math>c_1</math> and <math>c_2</math>. Hence we say that (2) is the general solution. In Example 1 the function <math>c_{1}e^{3x} + c_{2}e^{-2x}</math> is therefore the general solution
of the differential equation <math>\frac{d^{2}y}{dx^2} - \frac{dy}{dx} - 6y = 0</math>.
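In practice the two constants in the general solution are pinned down by initial conditions <math>y(0)</math> and <math>y'(0)</math>, which give the linear system <math>c_1 + c_2 = y(0)</math>, <math>r_{1}c_1 + r_{2}c_2 = y'(0)</math>. A minimal sketch (the helper name is mine; the defaults are the roots from Example 1):

```python
def fit_constants(y0, v0, r1=3.0, r2=-2.0):
    """Solve c1 + c2 = y0 and r1*c1 + r2*c2 = v0 for the constants in
    y = c1 e^{r1 x} + c2 e^{r2 x}; requires distinct roots r1 != r2."""
    det = r2 - r1                  # nonzero because the roots are distinct
    c1 = (r2 * y0 - v0) / det
    c2 = (v0 - r1 * y0) / det
    return c1, c2

# y'' - y' - 6y = 0 with y(0) = 1, y'(0) = 0 gives c1 = 2/5, c2 = 3/5.
c1, c2 = fit_constants(1.0, 0.0)
```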
''Case 2.'' The characteristic equation has complex roots. The roots of <math>t^2 + at + b = 0</math>
are given by the quadratic formula
<math display="block">
r_{1}, r_{2} = \frac{-a \pm \sqrt{a^2 - 4b}}{2}.
</math>
Since <math>a</math> and <math>b</math> are real, <math>r_1</math> and <math>r_2</math> are complex if and only if <math>a^2 - 4b  <  0</math>, which we now assume.  Setting <math>\alpha = - \frac{a}{2}</math> and <math>\beta = \frac{\sqrt{4b - a^2}}{2}</math>,
we have
<math display="block">
r_1 = \alpha + i\beta, \;\;\;  r_2 = \alpha - i\beta.
</math>
Note that <math>r_1</math> and <math>r_2</math> are complex conjugates of each other.
Motivated by the situation in Case 1, in which <math>r_1</math> and <math>r_2</math> were real, we consider the
complex-valued function <math>c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}</math>, where we now allow <math>c_1</math>
and <math>c_2</math> to be complex numbers. We shall show that
{{proofcard|Theorem|theorem-3|If <math>c_1</math> and <math>c_2</math> are any two complex conjugates of each other and if <math>r_1</math> and <math>r_2</math>
are complex solutions of the characteristic equation, then the function
<math display="block">
y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}
</math>
is real-valued.  Moreover it is a solution of the differential equation (1).
|Let <math>c_1 = \gamma + i\delta</math> and <math>c_2 = \gamma - i\delta</math>. Since <math>r_1 = \alpha + i\beta</math> and <math>r_2 = \alpha - i\beta</math>, we have
<math display="block">
\begin{eqnarray*}
c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} &=& (\gamma + i\delta) e^{(\alpha + i\beta)x} +
(\gamma - i \delta) e^{(\alpha - i\beta)x}\\
&=& e^{\alpha x}[(\gamma + i \delta) e^{i\beta x} + (\gamma - i\delta) e^{-i\beta x}].
\end{eqnarray*}
</math>
Recall that <math>e^{i\beta x} = \cos \beta x + i \sin \beta x</math> and <math>e^{-i\beta x} =
\cos(-\beta x) + i \sin(-\beta x) = \cos \beta x - i \sin \beta x</math>. Substituting, we get
<math display="block">
\begin{eqnarray*}
c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} &=& e^{\alpha x} [(\gamma + i\delta)(\cos \beta x + i \sin \beta x)
+ (\gamma - i\delta)(\cos \beta x - i \sin \beta x)]\\
&=& e^{\alpha x} (2\gamma \cos \beta x - 2\delta \sin \beta x).
\end{eqnarray*}
</math>
The right side is certainly real-valued, and this proves the first statement of the theorem.
Since <math>\gamma</math> and <math>\delta</math> are arbitrary real numbers, so are <math>2\gamma</math> and <math>-2\delta</math>. We may therefore replace <math>2\gamma</math> by <math>k_1</math> and <math>-2\delta</math> by <math>k_2</math>. We prove the second statement of the theorem by showing that the function
<span id{{=}}"eq6.8.3"/>
<math display="block">
\begin{equation}
y = e^{\alpha x}(k_{1} \cos \beta x + k_{2} \sin \beta x) 
\label{eq6.8.3}
\end{equation}
</math>
is a solution of the differential equation. Let <math>y_1 = e^{\alpha x} \cos \beta x</math> and
<math>y_2 = e^{\alpha x} \sin \beta x</math>. Since <math>y = k_{1}y_1 + k_{2}y_2</math>, it follows by (8.1) that it is enough to show that <math>y_1</math> and <math>y_2</math> separately are solutions of the differential equation. We give the proof for <math>y_1</math> and leave it to the reader to check it for <math>y_2</math>.  By the product rule,
<math display="block">
\begin{eqnarray*}
\frac{dy_1}{dx} &=& \frac{d}{dx} (e^{\alpha x}\cos \beta x)
= \alpha e^{\alpha x} \cos \beta x - \beta e^{\alpha x} \sin \beta x \\
                      &=& e^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x).
\end{eqnarray*}
</math>
Hence
<math display="block">
\begin{eqnarray*}
\frac{d^{2}y_1}{dx^2} &=& \alpha e^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x) +  e^{\alpha x}(-\alpha \beta \sin \beta x - \beta^{2} \cos \beta x) \\
&=& e^{\alpha x} [(\alpha^2  - \beta^2) \cos \beta x - 2\alpha\beta \sin \beta x].
\end{eqnarray*}
</math>
Thus
<math display="block">
\begin{eqnarray*}
\frac{d^{2}y_1}{dx^2} + a \frac{dy_1}{dx} + by_1
&=& e^{\alpha x}[(\alpha^2 - \beta^2) \cos \beta  x - 2\alpha \beta \sin \beta x] \\
& & + ae^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x) + be^{\alpha x} \cos \beta x \\
&=& e^{\alpha x}([(\alpha^2 - \beta^2) + a\alpha + b] \cos \beta x - \beta (2\alpha + a) \sin \beta x).
\end{eqnarray*}
</math>
But, remembering that <math>r_1 = \alpha + i \beta</math> and <math>r_2 = \alpha - i \beta</math> and that
these are the roots of the characteristic equation, we read from the quadratic formula that
<math display="block">
\alpha = - \frac{a}{2}, \;\;\; \beta = \frac{\sqrt{4b - a^2}}{2}.
</math>
Hence
<math display="block">
\begin{eqnarray*}
(\alpha^2 - \beta^2) + a \alpha + b
&=& \Bigl( -\frac{a}{2} \Bigr)^2 - \Bigl( \frac{\sqrt{4b - a^2}}{2} \Bigr)^2 + a \Bigl(- \frac{a}{2} \Bigr) + b\\
&=& \frac{a^2}{4} - b + \frac{a^2}{4} -\frac{a^2}{2} + b = 0,
\end{eqnarray*}
</math>
and also
<math display="block">
2 \alpha + a = 2 \Bigl(-\frac{a}{2} \Bigr) + a = -a + a = 0,
</math>
whence we get
<math display="block">
\begin{eqnarray*}
\frac{d^{2}y_1}{dx^2} + a \frac {dy_1}{dx} + by_1 &=& e^{\alpha x}
(0 \cdot \cos \beta x - \beta \cdot 0 \cdot  \sin \beta x) \\
&=& 0,
\end{eqnarray*}
</math>
and so <math>y_1</math> is a solution. Assuming the analogous proof for <math>y_2</math>, it follows that <math>y</math>,
as defined by (3), is also a solution and the proof is complete.}}
It can be shown, although again we defer the proof, that if <math>y</math> is any real solution to the
differential equation (1), and if the roots <math>r_1</math> and <math>r_2</math> of the characteristic equation are complex, then
<span id{{=}}"eq6.8.4"/>
<math display="block">
\begin{equation}
y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} , 
\label{eq6.8.4}
\end{equation}
</math>
for some complex number <math>c_1</math> and its complex conjugate <math>c_2</math>. Hence, if the roots are complex, the general solution of the differential equation can be written either as (4), or in the equivalent form,
<span id{{=}}"eq6.8.5"/>
<math display="block">
\begin{equation}
y = e^{\alpha x}(k_{1} \cos \beta x + k_{2} \sin \beta x), 
\label{eq6.8.5}
\end{equation}
</math>
where <math>r_1 = \alpha + i\beta</math> and <math>r_2 = \alpha - i\beta</math>, and <math>k_1</math> and <math>k_2</math> are arbitrary real numbers. Note that solutions (2) and (4) look the same, even though they involve different kinds of <math>r</math>'s and different kinds of <math>c</math>'s.
'''Example'''
Find the general solution of the differential equation
<math display="block">
\frac{d^{2}y}{dx^2} + 4 \frac{dy}{dx} + 13y = 0.
</math>
The characteristic equation is <math>t^2 + 4t + 13 = 0</math>. Using the quadratic formula, we find the roots
<math display="block">
\begin{eqnarray*}
r_1, r_2 &=& \frac{- 4 \pm \sqrt{16 - 4 \cdot 13}}{2} = \frac{- 4 \pm \sqrt{-36}}{2}\\
            &=& -2 \pm 3i.
\end{eqnarray*}
</math>
Hence, by (4), the general solution can be written
<math display="block">
y = c_{1}e^{(-2+3i)x} + c_{2}e^{(-2-3i)x},
</math>
where <math>c_1</math> and <math>c_2</math> are complex conjugates of each other. Unless otherwise stated, however, the solution should appear as an obviously real-valued
function. That is, it should be written without the use of complex numbers as in (5). Hence the
preferred form of the general solution is
<math display="block">
y = e^{-2x}(k_{1} \cos 3x + k_{2} \sin 3x).
</math>
We now consider the remaining possibility.
''Case 3.'' The characteristic equation <math>t^2 + at + b = 0</math> has only one root <math>r</math>. In this case,
we have <math>t^2 + at + b = (t - r)(t - r)</math>, and the quadratic formula yields <math>r = - \frac{a}{2}</math> and <math>\sqrt{a^2 - 4b} = 0</math>.
Theorem (8.2) is still valid, of course, and so one solution of the differential equation <math>\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0</math> is obtained by taking <math>y = e^{rx}</math>.
We shall show that, in the case of only one root,
<math>xe^{rx}</math> is also a solution. Setting <math>y = xe^{rx}</math>, we obtain
<math display="block">
\begin{eqnarray*}
        \frac{dy}{dx} &=& e^{rx} + xre^{rx} = e^{rx} (1 + rx),\\
\frac{d^{2}y}{dx^2} &=& re^{rx}(1 + rx) + e^{rx} \cdot r\\
                            &=& re^{rx} (2 + rx).
\end{eqnarray*}
</math>
Hence
<math display="block">
\begin{eqnarray*}
\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by &=& re^{rx}(2 + rx) + ae^{rx}(1 + rx) + bxe^{rx}\\
&=& e^{rx}(2r + r^{2}x + a + arx + bx) \\
&=& e^{rx}[x(r^2 + ar + b) + (a + 2r)].
\end{eqnarray*}
</math>
Since <math>r</math> is a root of <math>t^2 + at + b</math>, we know that <math>r^2 + ar + b = 0</math>. Moreover, we have seen that <math>r = -\frac{a}{2}</math>, and so <math>a + 2r = 0</math>. It follows that the last expression in the above equations is equal to zero, which shows that the function <math>xe^{rx}</math> is a solution of the differential equation.
Thus <math>e^{rx}</math> is one solution, and <math>xe^{rx}</math> is another. It follows by (8.1) that, for any two real
numbers <math>c_1</math> and <math>c_2</math>, a solution is given by
<math display="block">
y = c_{1}xe^{rx} + c_{2}e^{rx} = (c_{1}x + c_{2})e^{rx}.
</math>
Conversely, it can be shown that if <math>y</math> is any solution of the differential equation (1),
and if the characteristic equation has only one root <math>r</math>, then
<span id{{=}}"eq6.8.6"/>
<math display="block">
\begin{equation}
y = (c_{1}x + c_2)e^{rx}
\label{eq6.8.6}
\end{equation}
</math>
for some pair of real numbers <math>c_1</math> and <math>c_2</math>. The general solution in the case of a single root is therefore given by (6).
'''Example'''
Find the general solution of the differential equation <math>9y'' - 6y' + y = 0</math>. Here we
have used the common notation <math>y'</math> and <math>y''</math> for the first and second derivatives of the unknown function <math>y</math>. Dividing the equation by 9 to obtain a leading coefficient of 1, we get <math>y'' - \frac{2}{3}y' + \frac{1}{9}y = 0</math>, for which the characteristic equation is <math>t^2 - \frac{2}{3}t + \frac{1}{9} = 0</math>. Since <math>t^2 - \frac{2}{3}t + \frac{1}{9} = (t - \frac{1}{3})(t - \frac{1}{3})</math>, there is only one root, <math>r = \frac{1}{3}</math>. Hence
<math display="block">
y = (c_{1}x + c_{2})e^{x/3}
</math>
is the general solution.
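As a final numerical check of the repeated-root case (a sketch of my own), one can substitute <math>y = (c_{1}x + c_{2})e^{x/3}</math> into <math>9y'' - 6y' + y</math> using derivatives computed by the product rule:

```python
import math

def residual(c1, c2, x):
    """Value of 9y'' - 6y' + y at x for y = (c1 x + c2) e^{x/3}."""
    e = math.exp(x / 3)
    p = c1 * x + c2
    y = p * e
    yp = e * (c1 + p / 3)               # product rule
    ypp = e * (2 * c1 / 3 + p / 9)
    return 9 * ypp - 6 * yp + y

# The residual vanishes for every choice of c1, c2 and every x.
for c1, c2 in ((1.0, 0.0), (0.0, 1.0), (2.0, -3.0)):
    for x in (-3.0, 0.0, 1.5):
        assert abs(residual(c1, c2, x)) < 1e-9
```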
The solution of a differential equation can be checked just as simply as an indefinite integral, by
differentiation and substitution.
==General references==
{{cite web |title=Crowell and Slesnick’s Calculus with Analytic Geometry|url=https://math.dartmouth.edu/~doyle/docs/calc/calc.pdf |last=Doyle |first=Peter G.|date=2008 |access-date=Oct 29, 2024}}

Latest revision as of 20:53, 19 November 2024

[math] \newcommand{\ex}[1]{\item } \newcommand{\sx}{\item} \newcommand{\x}{\sx} \newcommand{\sxlab}[1]{} \newcommand{\xlab}{\sxlab} \newcommand{\prov}[1] {\quad #1} \newcommand{\provx}[1] {\quad \mbox{#1}} \newcommand{\intext}[1]{\quad \mbox{#1} \quad} \newcommand{\R}{\mathrm{\bf R}} \newcommand{\Q}{\mathrm{\bf Q}} \newcommand{\Z}{\mathrm{\bf Z}} \newcommand{\C}{\mathrm{\bf C}} \newcommand{\dt}{\textbf} \newcommand{\goesto}{\rightarrow} \newcommand{\ddxof}[1]{\frac{d #1}{d x}} \newcommand{\ddx}{\frac{d}{dx}} \newcommand{\ddt}{\frac{d}{dt}} \newcommand{\dydx}{\ddxof y} \newcommand{\nxder}[3]{\frac{d^{#1}{#2}}{d{#3}^{#1}}} \newcommand{\deriv}[2]{\frac{d^{#1}{#2}}{dx^{#1}}} \newcommand{\dist}{\mathrm{distance}} \newcommand{\arccot}{\mathrm{arccot\:}} \newcommand{\arccsc}{\mathrm{arccsc\:}} \newcommand{\arcsec}{\mathrm{arcsec\:}} \newcommand{\arctanh}{\mathrm{arctanh\:}} \newcommand{\arcsinh}{\mathrm{arcsinh\:}} \newcommand{\arccosh}{\mathrm{arccosh\:}} \newcommand{\sech}{\mathrm{sech\:}} \newcommand{\csch}{\mathrm{csch\:}} \newcommand{\conj}[1]{\overline{#1}} \newcommand{\mathds}{\mathbb} [/math]

In this section we shall show how to obtain the general solution of any differential equation of the form


[[math]] \begin{equation} \frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0, \label{eq6.8.1} \end{equation} [[/math]]

where [math]a[/math] and [math]b[/math] are real constants. Differential equations of this type occur frequently in mechanics and also in the theory of electric circuits. Equation (1) is a second- order differential equation, since it contains the second derivative [math]\frac{d^{2}y}{dx^2}[/math] but no higher derivative. It is called linear because each one of [math]y, \frac{dy}{dx}[/math], and [math]\frac{d^{2}y}{dx^2}[/math] occurs, if at all, to the first power. That is, if we set [math]\frac{dy}{dx} = z[/math] and [math]\frac{d^{2}y}{dx^2} = w[/math], then (1) becomes [math]w + az + by = 0[/math], and the left side is a linear polynomial, or polynomial of first degree, in [math]w, z[/math], and [math]y[/math]. A secondorder linear differential equation more general than (1) is


[[math]] \frac{d^{2}y}{dx^2} + f(x) \frac{dy}{dx} + g(x)y = h(x), [[/math]]

where [math]f[/math], [math]g[/math], and [math]h[/math] are given functions of [math]x[/math]. Equation (1) is a special case, called homogeneous, because [math]h[/math] is the zero function, and said to have constant coefilcients, since [math]f[/math] and [math]g[/math] are constant functions. Thus the topic of this section becomes: the study of second-order, linear, homogeneous differential equations with constant coefficients. An important and easily proved property of differential equations of this kind is the following:

Theorem

If [math]y_1[/math] and [math]y_2[/math] are any two solutions of the differential equation (1), and if [math]c_1[/math] and [math]c_2[/math] are any two real numbers, then [math]c_{1}y_{1} + c_{2}y_{2}[/math] is also a solution.


Show Proof

The proof uses only the elementary properties of the derivative. We know that

[[math]] \frac{d}{dx}(c_{1}y_{1} + c_{2}y_{2}) = c_{1} \frac{dy_1}{dx} + c_{2} \frac{dy_2}{dx}. [[/math]]
Hence

[[math]] \begin{eqnarray*} \frac{d^2}{{dx}^2} (c_{1}y_{1} + c_{2}y_{2}) &=& \frac{d}{dx}\Bigl(c_{1} \frac{dy_1}{dx} + c_{2} \frac{dy_2}{dx}\Bigr) \\ &=& c_{1} \frac{d^{2}y_1}{{dx}^2} + c_{2} \frac{d^{2}y_2}{{dx}^2}. \end{eqnarray*} [[/math]]
To test whether or not [math]c_{1}y_{1} + c_{2}y_{2}[/math] is a solution, we substitute it for [math]y[/math] in the differential equation:

[[math]] \begin{eqnarray*} && \frac{d^{2}}{dx^2} (c_{1}y_{1} + c_{2}y_{2}) + a \frac{d}{dx} (c_{1}y_{1} + c_{2}y_{2}) + b(c_{1}y_{1} + c_{2}y_{2})\\ &=& c_{1} \frac{d^{2}y_1}{dx^2} + c_{2} \frac{d^{2}y_2}{dx^2} + ac_{1} \frac{dy_1}{dx} + ac_{2} \frac{dy_2}{ dx} + bc_{1}y_{1} + bc_{2}y_{2}\\ &=& c_{1}\Bigl(\frac{d^{2}y_1}{dx^2} + a \frac{dy_1}{dx} + by_1\Bigr) + c_{2}\Bigl(\frac{d^{2}y_2}{dx^2} + a \frac{dy_2}{dx} + by_2\Bigr). \end{eqnarray*} [[/math]]
The expressions in parentheses in the last line are both zero because, [math]y_1[/math] and [math]y_2[/math] are by assumption solutions of the differential equation. Hence the top line is also zero, and so [math]c_{1}y_{1} + c_{2}y_{2}[/math] is a solution. This completes the proof.

It follows in particular that the sum and difference of any two solutions of (1) is a solution, and also that any constant multiple of a solution is again a solution. Finally, note that the constant function 0 is a solution of (1) for any constants [math]a[/math] and [math]b[/math]. In Section 5 of Chapter 5 we found that the general solution of the differential equation [math]\frac{dy}{dx} + ky = 0[/math] is the function [math]y = ce^{-kx}[/math], where [math]c[/math] is an arbitrary real number. This differential equation is first-order, linear, homogeneous, and with constant coefficients. Let us see whether by any chance an exponential function might also be a solution of the second-order differential equation [math]\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0[/math]. Let [math]y = e^{rx}[/math], where [math]r[/math] is any real number. Then

[[math]] \frac{dy}{dx} = re^{rx}, \;\;\; \frac{d^{2}y}{dx^2} = r^{2} e^{rx} . [[/math]]

Hence

[[math]] \begin{eqnarray*} \frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by &=& r^{2}e^{rx} + are^{rx} + be^{rx} \\ &=& (r^2 + ar + b)er^{rx}. \end{eqnarray*} [[/math]]


Since [math]e^{rx}[/math] is never zero, the right side is zero if and only if [math]r^2 + ar + b = 0[/math]. That is, we have shown that

Theorem

The function [math]e^{rx}[/math] is a solution of [math]\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0[/math] if and only if the real number [math]r[/math] is a solution of [math]t^2 + at + b = 0[/math].

The latter equation is called the characteristic equation of the differential equation. \medskip Example

Consider the differential equation [math]\frac{d^{2}y}{dx^2} - \frac{dy}{dx} - 6y = 0[/math]. Its characteristic equation is [math]t^2 - t - 6 = 0[/math]. Since [math]t^2 - t - 6 = (t - 3)(t + 2)[/math], the two solutions, or roots, are 3 and -2. Hence, by (8.2), both functions [math]e^{3x}[/math] and [math]e^{-2x}[/math] are solutions of the differential equation. It follows by (8.1) that the function

[[math]] y = c_{1} e^{3x} = + c_{2}e^{-2x} [[/math]]

is a solution for any two real numbers [math]c_1[/math] and [math]c_2[/math].

The form of the general solution of the differential equation

[[math]] \frac{d^{2}y}{dx^{2}} + a \frac{dy}{dx} + dx + by = 0 [[/math]]

depends on the roots of the characteristic equation [math]t^2 + at + b = 0[/math]. There are three different cases to be considered. \medskip Cuse 1. The characteristic equation has distinct real roots. This is the simplest case. We have

[[math]] t^2 + at + b = (t - r_1)(t - r_2), [[/math]]

where [math]r_1[/math], and [math]r_2[/math] are real numbers and [math]r_{1} \neq r_2[/math]. Both functions [math]e^{r_{1}x}[/math] and [math]e^{r_{2}x}[/math] are solutions of the differential equation, and so is any linear combination [math]c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}[/math]. Moreover it can be shown, although we defer the proof until Chapter 11, that if [math]y[/math] is any solution of the differential equation, then


[[math]] \begin{equation} y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} , \label{eq6.8.2} \end{equation} [[/math]]


for some two real numbers [math]c_1[/math] and [math]c_2[/math]. Hence we say that (2) is the general solution. In Example 1 the function [math]c_{1}e^{3x} + c_{2}e^{-2x}[/math] is therefore the general solution of the differential equation [math]\frac{d^{2}y}{dx^2} - \frac{dy}{dx} - 6y = 0[/math]. \medskip Case 2. The characteristic equation has complex roots. The roots of [math]t^2 + at + b = 0[/math] are given by the quadratic formula

[[math]] r_{1}, r_{2} = \frac{-a \pm \sqrt{a^2 - 4b}}{2}. [[/math]]

Since [math]a[/math] and [math]b[/math] are real, [math]r_1[/math] and [math]r_2[/math] are complex if and only if [math]a^2 - 4b \lt 0[/math], which we now assume. Setting [math]\alpha = - \frac{a}{2}[/math] and [math]\beta = \frac{\sqrt{4b - a^2}}{2}[/math], we have

[[math]] r_1 = \alpha + i\beta, \;\;\; r_2 = \alpha - i\beta. [[/math]]

Note that [math]r_1[/math], and [math]r_2[/math] are complex conjugates of each other. Motivated by the situation in Case 1, in which [math]r_1[/math] and [math]r_2[/math] were real, we consider the complex-valued function [math]c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}[/math], where we now allow [math]c_1[/math] and [math]c_2[/math], to be complex numbers. We shall show that

Theorem

If [math]c_1[/math] and [math]c_2[/math] are any two complex conjugates of each other and if [math]r_1[/math] and [math]r_2[/math] are complex solutions of the characteristic equation, then the function

[[math]] y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} [[/math]]

is real-valued. Moreover it is a solution of the differential equation (1).


Show Proof

Let [math]c_1 = \gamma + i\delta[/math] and [math]c_2 = \gamma - i\delta[/math]. Since [math]r_1 = \alpha + i\beta[/math] and [math]r_2 = \alpha - i\beta[/math], we have

[[math]] \begin{eqnarray*} c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} &=& (\gamma + i\delta) e^{(\alpha + i\beta)x} + (\gamma - i \delta) e^{(\alpha - i\beta)x}\\ &=& e^{\alpha x}[(\gamma + i \delta) e^{i\beta x} + (\gamma - i\delta) e^{-i\beta x}]. \end{eqnarray*} [[/math]]
Recall that [math]e^{i\beta x} = \cos \beta x + i \sin \beta x[/math] and [math]e^{-i\beta x} = \cos(\beta x) + i sin(-\beta x) = \cos \beta x - i \sin \beta x[/math]. Substituting, we get

[[math]] \begin{eqnarray*} c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} &=& e^{\alpha x} [(\gamma + i\delta)(\cos \beta x + i \sin \beta x) + (\gamma - i\delta)(\cos \beta x - i \sin \beta x)]\\ &=& e^{\alpha x} (2\gamma \cos \beta x - 2\delta \sin \beta x). \end{eqnarray*} [[/math]]
The right side is certainly real-valued, and this proves the first statement of the theorem. Since [math]\gamma[/math] and [math]\delta[/math] are arbitrary real numbers, so are [math]2\gamma[/math] and [math]-2\delta[/math]. We may therefore replace [math]2\gamma[/math] by [math]k_1[/math] and [math]-2\delta[/math] by [math]k_2[/math]. We prove the second statement of the theorem by showing that the function

[[math]] \begin{equation} y = e^{\alpha x}(k_{1} \cos \beta x + k_{2} \sin \beta x) \label{eq6.8.3} \end{equation} [[/math]]
is a solution of the differential equation. Let [math]y_1 = e^{\alpha x} \cos \beta x[/math] and [math]y_2 = e^{ax} \sin \beta x[/math]. Since [math]y = k_{1}y_1 + k_{2}y_2[/math], it follows by (8.1) that it is enough to show that [math]y_1[/math] and [math]y_2[/math] separately are solutions of the differential equation. We give the proof for [math]y_1[/math] and leave it to the reader to check it for [math]y_2[/math]. By the product rule,

[[math]] \begin{eqnarray*} \frac{dy_1}{dx} &=& \frac{d}{dx} (e^{\alpha x}\cos \beta x) = \alpha e^{\alpha x} \cos \beta x - \beta e^{\alpha x} \sin \beta x \\ &=& e^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x). \end{eqnarray*} [[/math]]
Hence

[[math]] \begin{eqnarray*} \frac{d^{2}y_1}{dx^2} &=& \alpha e^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x) + e^{\alpha x}(-\alpha \beta \sin \beta x - \beta^{2} \cos \beta x) \\ &=& e^{\alpha x} [(\alpha^2 - \beta^2) \cos \beta x - 2\alpha\beta \sin \beta x]. \end{eqnarray*} [[/math]]
Thus

[[math]] \begin{eqnarray*} \frac{d^{2}y_1}{dx^2} + a \frac{dy_1}{dx} + by_1 &=& e^{\alpha x}[(\alpha^2 - \beta^2) \cos \beta x - 2\alpha \beta \sin \beta x] \\ & & + ae^{\alpha x}(\alpha \cos \beta x - \beta \sin \beta x) + be^{\alpha x} \cos \beta x \\ &=& e^{\alpha x}([(\alpha^2 - \beta^2) + a\alpha + b] \cos \beta x - \beta (2\alpha + a) \sin \beta x). \end{eqnarray*} [[/math]]
But, remembering that [math]r_1 = \alpha + i \beta[/math] and [math]r_2 = \alpha - i \beta[/math] and that these are the roots of the characteristic equation, we read from the quadratic formula that

[[math]] \alpha = - \frac{a}{2}, \;\;\; \beta = \frac{\sqrt{4b - a^2}}{2}. [[/math]]
Hence

[[math]] \begin{eqnarray*} (\alpha^2 - \beta^2) + a \alpha + b &=& \Bigl( -\frac{a}{2} \Bigr)^2 - \Bigl( \frac{\sqrt{4b - a^2}}{2} \Bigr)^2 + a \Bigl(- \frac{a}{2} \Bigr) + b\\ &=& \frac{a^2}{4} - b + \frac{a^2}{4} -\frac{a^2}{2} + b = 0, \end{eqnarray*} [[/math]]
and also

[[math]] 2 \alpha + a = 2 \Bigl(-\frac{a}{2} \Bigr) + a = -a + a = 0, [[/math]]
whence we get

[[math]] \begin{eqnarray*} \frac{d^{2}y_1}{dx^2} + \alpha \frac {dy_1}{dx} + by_1 &=& e^{\alpha x} (0 \cdot \cos \beta x - \beta \cdot 0 \cdot \sin \beta x) \\ &=& 0, \end{eqnarray*} [[/math]]
and so [math]y_1[/math] is a solution. Assuming the analogous proof for [math]y_2[/math], it follows that [math]y[/math], as defined by (3), is also a solution and the proof is complete.

It can be shown, although again we defer the proof, that if [math]y[/math] is any real solution to the differential equation (1), and if the roots [math]r_1[/math] and [math]r_2[/math] of the characteristic equation are complex, then

[[math]] \begin{equation} y = c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x} , \label{eq6.8.4} \end{equation} [[/math]]

for some complex number [math]c_1[/math] and its complex conjugate [math]c_2[/math]. Hence, if the roots are complex, the general solution of the differential equation can be written either as (4), or in the equivalent form,

[[math]] \begin{equation} y = e^{\alpha x}(k_{1} \cos \beta x + k_{2} \sin \beta x), \label{eq6.8.5} \end{equation} [[/math]]

where [math]r_1 = \alpha + i\beta[/math] and [math]r_2 = \alpha - i \beta[/math], and [math]k_1[/math] and [math]k_2[/math] are arbitrary real numbers. Note that solutions (2) and (4) look the same, even though they involve different kinds of [math]r[/math]'s and different kinds of [math]c[/math]'s.
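The equivalence of forms (4) and (5) can be illustrated numerically: with [math]c_2[/math] the conjugate of [math]c_1[/math], the sum [math]c_{1}e^{r_{1}x} + c_{2}e^{r_{2}x}[/math] is real and matches [math]e^{\alpha x}(k_1 \cos \beta x + k_2 \sin \beta x)[/math] for [math]k_1 = 2 \operatorname{Re} c_1[/math] and [math]k_2 = -2 \operatorname{Im} c_1[/math]. The constants below are arbitrary sample values, not from the text:

```python
import cmath
import math

# Sample values (assumptions for illustration only).
alpha, beta = -2.0, 3.0
r1, r2 = complex(alpha, beta), complex(alpha, -beta)
c1 = complex(0.4, 0.9)
c2 = c1.conjugate()  # c2 must be the conjugate of c1 for y to be real
k1, k2 = 2 * c1.real, -2 * c1.imag

for x in (0.0, 0.3, 1.1):
    complex_form = c1 * cmath.exp(r1 * x) + c2 * cmath.exp(r2 * x)
    real_form = math.exp(alpha * x) * (k1 * math.cos(beta * x)
                                       + k2 * math.sin(beta * x))
    # The two forms agree to rounding error at each sample point.
    print(abs(complex_form - real_form))
```

The identity behind this is [math]c_{1}e^{i\beta x} + \bar{c}_{1}e^{-i\beta x} = 2 \operatorname{Re}(c_{1}e^{i \beta x})[/math], which is real for every [math]x[/math].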

Example

Find the general solution of the differential equation

[[math]] \frac{d^{2}y}{dx^2} + 4 \frac{dy}{dx} + 13y = 0. [[/math]]

The characteristic equation is [math]t^2 + 4t + 13 = 0[/math]. Using the quadratic formula, we find the roots

[[math]] \begin{eqnarray*} r_1, r_2 &=& \frac{- 4 \pm \sqrt{16 - 4 \cdot 13}}{2} = \frac{- 4 \pm \sqrt{-36}}{2}\\ &=& -2 \pm 3i. \end{eqnarray*} [[/math]]


Hence, by (4), the general solution can be written

[[math]] y = c_{1}e^{(-2+3i)x} + c_{2}e^{(-2-3i)x}, [[/math]]

where [math]c_1[/math] and [math]c_2[/math] are complex conjugates of each other. Unless otherwise stated, however, the solution should appear as an obviously real-valued function. That is, it should be written without the use of complex numbers as in (5). Hence the preferred form of the general solution is

[[math]] y = e^{-2x}(k_{1} \cos 3x + k_{2} \sin 3x). [[/math]]
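As a quick check (not part of the text), one can verify numerically that this function satisfies the original equation for an arbitrary choice of [math]k_1[/math] and [math]k_2[/math]:

```python
import math

# Arbitrary sample constants (an assumption for illustration).
k1, k2 = 1.5, -0.7

def y(x):
    return math.exp(-2 * x) * (k1 * math.cos(3 * x) + k2 * math.sin(3 * x))

def residual(x, h=1e-4):
    # Central-difference approximations to y' and y''.
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 + 4 * d1 + 13 * y(x)

# Near zero (finite-difference error only) at every sample point.
print(max(abs(residual(x / 10)) for x in range(0, 21)))
```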

We now consider the remaining possibility.

'''Case 3.''' The characteristic equation [math]t^2 + at + b = 0[/math] has only one root [math]r[/math]. In this case, we have [math]t^2 + at + b = (t - r)(t - r)[/math], and the quadratic formula yields [math]r = - \frac{a}{2}[/math] and [math]\sqrt{a^2 - 4b} = 0[/math]. Theorem (8.2) is still valid, of course, and so one solution of the differential equation [math]\frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by = 0[/math] is obtained by taking [math]y = e^{rx}[/math]. We shall show that, in the case of only one root, [math]xe^{rx}[/math] is also a solution. Setting [math]y = xe^{rx}[/math], we obtain

[[math]] \begin{eqnarray*} \frac{dy}{dx} &=& e^{rx} + xre^{rx} = e^{rx} (1 + rx),\\ \frac{d^{2}y}{dx^2} &=& re^{rx}(1 + rx) + e^{rx} \cdot r\\ &=& re^{rx} (2 + rx). \end{eqnarray*} [[/math]]


Hence

[[math]] \begin{eqnarray*} \frac{d^{2}y}{dx^2} + a \frac{dy}{dx} + by &=& re^{rx}(2 + rx) + ae^{rx}(1 + rx) + bxe^{rx}\\ &=& e^{rx}(2r + r^{2}x + a + arx + bx) \\ &=& e^{rx}[x(r^2 + ar + b) + (a + 2r)]. \end{eqnarray*} [[/math]]


Since [math]r[/math] is a root of [math]t^2 + at + b[/math], we know that [math]r^2 + ar + b = 0[/math]. Moreover, we have seen that [math]r = -\frac{a}{2}[/math], and so [math]a + 2r = 0[/math]. It follows that the last expression in the above equations is equal to zero, which shows that the function [math]xe^{rx}[/math] is a solution of the differential equation. Thus [math]e^{rx}[/math] is one solution, and [math]xe^{rx}[/math] is another. It follows by (8.1) that, for any two real numbers [math]c_1[/math] and [math]c_2[/math], a solution is given by

[[math]] y = c_{1}xe^{rx} + c_{2}e^{rx} = (c_{1}x + c_{2})e^{rx}. [[/math]]

Conversely, it can be shown that if [math]y[/math] is any solution of the differential equation (1), and if the characteristic equation has only one root [math]r[/math], then


[[math]] \begin{equation} y = (c_{1}x + c_2)e^{rx} \label{eq6.8.6} \end{equation} [[/math]]


for some pair of real numbers [math]c_1[/math] and [math]c_2[/math]. The general solution in the case of a single root is therefore given by (6).
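Both solutions in the single-root case can be checked the same way as before. The sketch below (the root [math]r = -1.5[/math] is a sample value, not from the text) verifies [math]e^{rx}[/math] and [math]xe^{rx}[/math] against [math]y'' + ay' + by = 0[/math], using [math]a = -2r[/math] and [math]b = r^2[/math] so that the characteristic polynomial is [math](t - r)^2[/math]:

```python
import math

# Repeated-root case: with a = -2r and b = r^2 the characteristic
# polynomial is (t - r)^2.  Sample root (an assumption): r = -1.5.
r = -1.5
a, b = -2 * r, r * r

def residual(y, x, h=1e-4):
    # Central-difference approximations to y' and y''.
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return d2 + a * d1 + b * y(x)

exp_sol = lambda x: math.exp(r * x)       # first solution, e^(rx)
xexp_sol = lambda x: x * math.exp(r * x)  # second solution, x e^(rx)

for sol in (exp_sol, xexp_sol):
    # Each residual is near zero (finite-difference error only).
    print(max(abs(residual(sol, x / 10)) for x in range(0, 21)))
```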

Example

Find the general solution of the differential equation [math]9y'' - 6y' + y = 0[/math]. Here we have used the common notation [math]y'[/math] and [math]y''[/math] for the first and second derivatives of the unknown function [math]y[/math]. Dividing the equation by 9 to obtain a leading coefficient of 1, we get [math]y'' - \frac{2}{3}y' + \frac{1}{9}y = 0[/math], for which the characteristic equation is [math]t^2 - \frac{2}{3}t + \frac{1}{9} = 0[/math]. Since [math]t^2 - \frac{2}{3}t + \frac{1}{9} = (t - \frac{1}{3})(t - \frac{1}{3})[/math], there is only one root, [math]r = \frac{1}{3}[/math]. Hence

[[math]] y = (c_{1}x + c_{2})e^{x/3} [[/math]]

is the general solution.

The solution of a differential equation can be checked just as simply as an indefinite integral, by differentiation and substitution.
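For instance, the solution of the last example can be checked by differentiating by hand and substituting into [math]9y'' - 6y' + y = 0[/math]. The sketch below encodes those hand-computed derivatives; the constants [math]c_1, c_2[/math] are arbitrary sample values:

```python
import math

# Check y = (c1 x + c2) e^(x/3) against 9y'' - 6y' + y = 0.
# Hand-computed derivatives:
#   y'  = (c1 + (c1 x + c2)/3) e^(x/3)
#   y'' = (2 c1/3 + (c1 x + c2)/9) e^(x/3)
c1, c2 = 2.0, -3.0  # arbitrary sample constants (an assumption)

def check(x):
    e = math.exp(x / 3)
    y = (c1 * x + c2) * e
    y1 = (c1 + (c1 * x + c2) / 3) * e
    y2 = (2 * c1 / 3 + (c1 * x + c2) / 9) * e
    return 9 * y2 - 6 * y1 + y

# The substitution cancels exactly, so only rounding error remains.
print(max(abs(check(x)) for x in range(-5, 6)))
```

Because the derivatives are exact rather than approximated, the residual here is zero up to floating-point rounding, a sharper check than the finite-difference tests above.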

General references

Doyle, Peter G. (2008). "Crowell and Slesnick's Calculus with Analytic Geometry" (PDF). Retrieved Oct 29, 2024.