Homogeneous linear differential equations with constant coefficients

Consider the homogeneous linear differential equation

\[y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1y' + a_0 y = 0 \tag{Eq:code}\]

where \(a_0, a_1, \cdots, a_{n-1} \in \mathbb{R}\) are constants.

Using the differential operator \(D = \frac{d}{dx}\) and the operator polynomial

\[E = D^n + a_{n-1}D^{n-1} + \cdots + a_1D + a_0, \tag{Eq:epoly}\]

(Eq:code) can be expressed as

\[Ey = 0.\]

Now, consider the polynomial of variable \(t\)

\[F(t) = t^n + a_{n-1}t^{n-1} + \cdots + a_1t + a_0.\]

Then, we have

\[E = F(D)\]

and (Eq:code) is expressed as

\[F(D)y = 0.\]
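
To see the operator notation in action, here is a minimal sketch (assuming the sympy library is available) that applies \(F(D)\), for the illustrative choice \(F(t) = t^2 - 3t + 2\), to a few exponentials:

```python
from sympy import symbols, exp

x = symbols('x')

# F(D) for the illustrative choice F(t) = t^2 - 3t + 2:
# apply D^2 - 3D + 2 to an expression f(x).
def F_of_D(f):
    return f.diff(x, 2) - 3*f.diff(x) + 2*f

print(F_of_D(exp(x)))      # 0: e^x solves F(D)y = 0
print(F_of_D(exp(2*x)))    # 0: e^{2x} solves F(D)y = 0
print(F_of_D(exp(3*x)))    # 2*exp(3*x): e^{3x} does not
```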

We can use the properties of polynomials to solve this type of differential equation. We need some results from algebra.

Definition (Relatively prime, coprime)

Polynomials \(F_1(t)\) and \(F_2(t)\) are said to be relatively prime, or coprime, if the equations \(F_1(t) = 0\) and \(F_2(t) = 0\) have no common solutions in \(\mathbb{C}\).

Lemma

The polynomials \(F_1(t)\) and \(F_2(t)\) in \(t\) are relatively prime if and only if there exist polynomials \(G_1(t)\) and \(G_2(t)\) such that

\[G_1(t)F_1(t) + G_2(t)F_2(t) = 1.\]

Proof. Omitted. ■

Remark. The proof of this lemma is based on the Euclidean Algorithm for finding the greatest common divisor (GCD) of polynomials. The right-hand side of the above equation being equal to 1 indicates that the GCD of \(F_1(t)\) and \(F_2(t)\) is 1 (or any nonzero constant, but not a polynomial of positive degree in \(t\)); that is, they are relatively prime (just like integers). □

See also: Polynomial greatest common divisor (Wikipedia)

Example. Let \(F_1(t) = t-1\) and \(F_2(t) = t-2\). These are clearly relatively prime, and

\[G_1(t)(t-1) + G_2(t)(t-2) = 1\]

holds with \(G_1(t) = 1\) and \(G_2(t) = -1\). □


Example. Let \(F_1(t) = t^3 - 3t^2 + 2t + 3\) and \(F_2(t) = t^2 + 1\). By dividing \(F_1(t)\) by \(F_2(t)\), we have

\[F_1(t) = (t-3)F_2(t)  + t+6.\tag{eg. 1}\]

By dividing \(F_2(t) = t^2 + 1\) by \(t+6\), we have

\[F_2(t) = (t-6)(t+6) + 37.\tag{eg. 2}\]

Since the last nonzero remainder, 37, is a constant, the greatest common divisor of \(F_1(t)\) and \(F_2(t)\) is 1 (we use the convention that the coefficient of the leading term of a GCD is 1), which means they are relatively prime.

From (eg. 2), 

\[37 = F_2(t) - (t-6)(t+6).\]

Using (eg. 1),

\[\begin{eqnarray}37 &=& F_2(t) - (t-6)(F_1(t) - (t-3)F_2(t))\\ &=& -(t-6)F_1(t) + (t^2 -9t + 19)F_2(t). \end{eqnarray}\]

Setting \(G_1(t) = -\frac{t-6}{37}\) and \(G_2(t) = \frac{t^2 -9t + 19}{37}\), we have

\[G_1(t)F_1(t) + G_2(t)F_2(t) = 1,\]

as required. □
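
This hand computation can be verified with a computer algebra system. The following is a minimal sketch assuming sympy, whose gcdex returns cofactors and the monic GCD satisfying \(s F_1 + u F_2 = h\):

```python
from sympy import symbols, gcdex, expand

t = symbols('t')
F1 = t**3 - 3*t**2 + 2*t + 3
F2 = t**2 + 1

# Extended Euclidean algorithm for polynomials: s*F1 + u*F2 == h = gcd(F1, F2)
s, u, h = gcdex(F1, F2, t)
print(s, u, h)  # expected: -t/37 + 6/37, t**2/37 - 9*t/37 + 19/37, 1

# Verify the Bezout identity found by hand above
print(expand(s*F1 + u*F2))  # -> 1
```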

Example. Let \(F_1(t) = t^3 - 8 = (t-2)(t^2 +2t + 4)\) and \(F_2(t) = t^2 - 4t + 4 = (t-2)^2\). The greatest common divisor is \(t-2\), and hence they are not relatively prime. Suppose there were polynomials \(G_1(t)\) and \(G_2(t)\) such that \[G_1(t)F_1(t) + G_2(t)F_2(t) = 1\] holds. The left-hand side is divisible by \(t-2\), whereas the right-hand side is not. This is a contradiction. Therefore, no such polynomials \(G_1(t)\) and \(G_2(t)\) exist. □


Theorem (Solution of \(F_1(D)F_2(D)y = 0\) when \(F_1(t)\) and \(F_2(t)\) are relatively prime)

Let \(F_1(t)\) and \(F_2(t)\) be polynomials in \(t\) that are relatively prime, and \(F(t) = F_1(t)F_2(t)\). Then, any solution of \(F(D)y = 0\) is of the form \(y = y_1 + y_2\) where \(y_1\) and \(y_2\) are solutions of \(F_1(D)y = 0\) and \(F_2(D)y = 0\), respectively. Conversely, any function of the form \(y = y_1 + y_2\), where \(y_1\) and \(y_2\) are solutions of \(F_1(D)y = 0\) and \(F_2(D)y = 0\), respectively, is a solution of \(F(D)y = 0\).

Proof. Suppose \(y\) is a solution of \(F(D)y = F_1(D)F_2(D)y = 0\). By the above Lemma, there exist polynomials \(G_1(t)\) and \(G_2(t)\) such that \(G_1(t)F_1(t)  + G_2(t)F_2(t) = 1\). Using these polynomials, let us define

\[\begin{eqnarray*} y_1 &=& G_2(D)F_2(D)y,\\ y_2 &=& G_1(D)F_1(D)y. \end{eqnarray*}\]

Since \(G_1(t)F_1(t) + G_2(t)F_2(t) = 1\) means that \(G_1(D)F_1(D) + G_2(D)F_2(D)\) is the identity operator, we have \(y = y_1 + y_2\). Now,

\[\begin{eqnarray*} F_1(D)y_1 &=& F_1(D)G_2(D)F_2(D)y \\ &=& G_2(D)F_1(D)F_2(D)y ~~ \text{(commutative law)}\\ &=& G_2(D)F(D)y = 0. \end{eqnarray*}\]

Thus, \(y_1\) is a solution of \(F_1(D)y = 0\). Similarly, \(y_2\) is a solution of \(F_2(D)y = 0\).

Conversely, suppose \(y_1\) and \(y_2\) are solutions of \(F_1(D)y=0\) and \(F_2(D)y = 0\), respectively, and let \(y = y_1 + y_2\). Then,

\[\begin{eqnarray*} F(D)y &=& F(D)(y_1 + y_2) \\ &=& F_2(D)F_1(D)y_1 + F_1(D)F_2(D)y_2\\ &=& 0. \end{eqnarray*}\]

Thus, \(y\) is a solution of \(F(D)y = 0\). ■


Example. Let us find the general solution of

\[y'' -3y' + 2y = 0.\]

Using the differential operator, this can be written as

\[(D^2 -3D +2)y = 0.\]

The operator can be factorized as

\[D^2 - 3D + 2 = (D-1)(D-2)\]

and \(D-1\) and \(D-2\) are relatively prime. The solution of \((D-1)y = 0\) is \(y = C_1e^x\), and that of \((D-2)y = 0\) is \(y = C_2e^{2x}\). Thus, the general solution of \((D-1)(D-2)y = 0\) is

\[ y = C_1e^x + C_2e^{2x}\]

where \(C_1\) and \(C_2\) are constants. □
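
As a sanity check, a computer algebra system reproduces this general solution; here is a minimal sketch assuming sympy:

```python
from sympy import symbols, Function, dsolve

x = symbols('x')
y = Function('y')

# y'' - 3y' + 2y = 0
ode = y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x)
print(dsolve(ode, y(x)))
# expected (up to how sympy groups terms): Eq(y(x), (C1 + C2*exp(x))*exp(x)),
# i.e. y = C1*e^x + C2*e^{2x}
```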

Complex-valued functions

To treat more general linear equations, it is convenient to use complex-valued functions. Before proceeding, please read the post Calculus of complex-valued functions.

We use the following theorem without proof.

Theorem (Fundamental Theorem of Algebra)

A polynomial equation of degree \(n\) with coefficients in \(\mathbb{R}\),

\[a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0 = 0, ~~ a_0,a_1,\cdots,a_n\in\mathbb{R}, a_n \neq 0,\]

has exactly \(n\) solutions in \(\mathbb{C}\), counted with multiplicity.

This theorem implies that it is always possible to factorize the above polynomial of degree \(n\) as

\[a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0 = a_n(x - \alpha_1)^{m_1}(x - \alpha_2)^{m_2}\cdots (x-\alpha_l)^{m_l}\]

where \(l\) is some natural number, \(\alpha_1, \alpha_2, \cdots, \alpha_l\in \mathbb{C}\) are the distinct solutions, and \(m_1, m_2, \cdots, m_l\in \mathbb{N}\) are the multiplicities of the solutions with \(m_1 + m_2 + \cdots + m_l = n\).
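
For instance, sympy's roots function reports each distinct root together with its multiplicity (the cubic below is an arbitrary illustration):

```python
from sympy import symbols, roots

x = symbols('x')

# (x - 1)^2 (x + 2) = x^3 - 3x + 2: distinct roots with multiplicities
print(roots(x**3 - 3*x + 2))  # -> {1: 2, -2: 1}, so m1 + m2 = 3 = n
```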

Remark. It is easy to show that if \(\alpha \in \mathbb{C}\) is a solution of a polynomial equation with real coefficients, then its complex conjugate \(\bar{\alpha}\) is also a solution (exercise!). □


Theorem (Solution of \((D-\alpha)^my = 0\))

Let \(F(t) = (t-\alpha)^m\) where \(\alpha\in\mathbb{C}\) and \(m\in\mathbb{N}\). The general solution of the differential equation \(F(D)y = 0\) is given by

\[y = c_0e^{\alpha x} + c_1xe^{\alpha x} + \cdots + c_{m-1}x^{m-1}e^{\alpha x}\tag{eq:sol1}\]

where \(c_0, c_1, \cdots, c_{m-1} \in \mathbb{C}\) are constants.

Proof. Let \(h(x)\) be an arbitrary (sufficiently differentiable) function of \(x\). We have

\[(D-\alpha)h(x)e^{\alpha x} = h'(x)e^{\alpha x} + h(x)\alpha e^{\alpha x} - \alpha h(x)e^{\alpha x} = h'(x)e^{\alpha x}.\]

By repeating this, we have, in general

\[(D-\alpha)^kh(x)e^{\alpha x} = h^{(k)}(x)e^{\alpha x}.\]

If \(h(x)\) is a polynomial of degree less than \(m\), then \((D-\alpha)^{m}h(x)e^{\alpha x} = 0\). Therefore, any function of the form (eq:sol1) is a solution of \(F(D)y = 0\).

Conversely, suppose that \(y = y(x)\) is a solution of \(F(D)y = 0\) and let \(h(x) = y(x)e^{-\alpha x}\). We have \(y(x) = h(x)e^{\alpha x}\). From what we have shown above,  \(F(D)y = h^{(m)}(x)e^{\alpha x}\). But \(F(D)y = 0\) so \(h^{(m)}(x)e^{\alpha x} = 0\), and hence \(h^{(m)}(x) = 0\). This indicates that \(h(x)\) is a polynomial of degree at most \((m-1)\). Thus, \(y(x) = h(x)e^{\alpha x}\) has the form of (eq:sol1). ■
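
The key identity \((D-\alpha)^k h(x)e^{\alpha x} = h^{(k)}(x)e^{\alpha x}\) is easy to confirm symbolically. Here is a minimal sketch assuming sympy, with the arbitrary choice \(m = 3\):

```python
from sympy import symbols, exp

x, a = symbols('x alpha')

def D_minus_alpha_power(f, m):
    """Apply the operator (D - alpha) to f, m times."""
    for _ in range(m):
        f = f.diff(x) - a*f
    return f

m = 3
for k in range(m):
    # x^k * e^{alpha x} with k < m is annihilated by (D - alpha)^m
    print(D_minus_alpha_power(x**k * exp(a*x), m))  # -> 0, 0, 0
```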


Example (Harmonic oscillator). The equation of motion of a harmonic oscillator with spring constant \(k\) and mass \(m\) is given by

\[m\frac{d^2x}{dt^2} = -kx.\]

Let \(\omega = \sqrt{\frac{k}{m}}\). Dividing through by \(m\) and writing \(D = \frac{d}{dt}\), the equation becomes

\[(D^2 + \omega^2)x = 0.\]

But \(D^2 + \omega^2 = (D - i\omega)(D +i\omega)\), so we need to solve 

\[(D - i\omega)x = 0\]

and

\[(D + i\omega)x = 0.\]

From the former, we have \(x_1(t) = C_1e^{i\omega t}\). From the latter, we have \(x_2(t) = C_2e^{-i\omega t}\). Therefore, the general solution is 

\[x(t) = C_1e^{i\omega t} + C_2e^{-i\omega t}.\]

However, \(x(t)\) should be a real-valued function because it describes a physical quantity (the coordinate of the mass point). Let us rewrite the above solution using Euler's formula:

\[x(t) = (C_1 + C_2)\cos (\omega t) + i(C_1 - C_2)\sin(\omega t).\]

For this to be a real function, \(C_1 + C_2\) must be real and \(C_1 - C_2\) must be purely imaginary. This can be achieved if we set \(C_2 = \bar{C_1}\) (i.e., \(C_1\) and \(C_2\) are complex conjugates of each other). By setting \(A = C_1 + C_2\) and \(B = i(C_1 - C_2)\), we can rewrite the solution as

\[x(t) = A\cos\left(\sqrt{\frac{k}{m}} t\right) + B\sin\left(\sqrt{\frac{k}{m}}t\right).\]

This is indeed a real-valued function if \(A\) and \(B\) are real constants. □
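
A computer algebra check (a sketch assuming sympy; declaring \(\omega > 0\) makes dsolve return the real sine/cosine form directly):

```python
from sympy import symbols, Function, dsolve

t = symbols('t')
omega = symbols('omega', positive=True)
x = Function('x')

# x'' + omega^2 x = 0
ode = x(t).diff(t, 2) + omega**2 * x(t)
print(dsolve(ode, x(t)))
# expected: Eq(x(t), C1*sin(omega*t) + C2*cos(omega*t))
```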


Example. Let us solve

\[y''' + y'' + y' - 3y = 0.\]

Let \(F(t) = t^3 + t^2 + t - 3\). The above ODE is \(F(D)y = 0\). Since 

\[F(t) = (t-1)(t^2 + 2t + 3) = (t-1)(t - \alpha)(t - \bar{\alpha})\] where \(\alpha = -1 +i\sqrt{2}\), the general solution is the sum of the solutions of \((D-1)y = 0\), \((D - \alpha)y = 0\), and \((D - \bar{\alpha})y = 0\). From these, we have \(y = Ce^x\), \(y = C_1e^{\alpha x}\), and \(y = C_2e^{\bar{\alpha} x}\), respectively. Thus, the general solution is

\[y = Ce^{x} + C_1e^{(-1+i\sqrt{2}) x} + C_2e^{(-1 -i\sqrt{2}) x}.\]

To make this a real function, we set \(A = C_1 + C_2\) and \(B = i(C_1 - C_2)\) and use Euler's formula to have

\[y = Ce^{x} + Ae^{-x}\cos(\sqrt{2}x) + Be^{-x}\sin(\sqrt{2}x).\]

Note that this is a real-valued function if all the constants \(A, B\), and \(C\) are real. □
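
Both the factorization and the real form of the general solution can be checked symbolically; here is a minimal sketch assuming sympy:

```python
from sympy import symbols, Function, dsolve, factor

x, t = symbols('x t')
y = Function('y')

# Factor F(t) = t^3 + t^2 + t - 3 over the rationals
print(factor(t**3 + t**2 + t - 3))  # -> (t - 1)*(t**2 + 2*t + 3)

# y''' + y'' + y' - 3y = 0
ode = y(x).diff(x, 3) + y(x).diff(x, 2) + y(x).diff(x) - 3*y(x)
print(dsolve(ode, y(x)))
# expected, up to constant labels:
# Eq(y(x), C3*exp(x) + (C1*sin(sqrt(2)*x) + C2*cos(sqrt(2)*x))*exp(-x))
```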

