Posts

Showing posts from January, 2023

First-order linear differential equations

In this post, we'll see how to solve first-order linear differential equations. Consider the following first-order homogeneous linear differential equation: \[y' + p(x)y = 0. \tag{Eq:homdiff}\] By separating variables, we have \[\frac{dy}{y} = -p(x)dx.\] Integrating both sides gives \[\log|y| = -\int p(x)dx + c\] so that \[y = Ce^{-\int p(x)dx}\tag{Eq:homsol}\] where \(C\) is a constant. Example. Let's solve \[y' + 2y = 0.\] By separating variables, we have \[\frac{dy}{y} = -2dx.\] Integrating both sides, \[\log|y| = -2x + c.\] Exponentiating both sides, we have \[y = Ce^{-2x}\] where \(C\) is a constant. □ Method of variation of parameters Next, consider the inhomogeneous differential equation \[y' + p(x)y + q(x) = 0. \tag{Eq:inhomdiff}\] As we learned in a previous post, we need to find one particular solution to construct the general solution. How do we find a particular solution? See also: Linear differential equations: Introduction Here's one way: the method of variation of parameters, sketched below.
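The idea, in brief (a sketch; the full post presumably works through the details): replace the constant \(C\) in (Eq:homsol) with an unknown function \(u(x)\) and substitute \(y = u(x)e^{-\int p(x)dx}\) into (Eq:inhomdiff). Since \[y' + p(x)y = u'(x)e^{-\int p(x)dx},\] the equation reduces to \(u'(x)e^{-\int p(x)dx} + q(x) = 0\), that is, \[u'(x) = -q(x)e^{\int p(x)dx},\] which determines \(u(x)\), and hence a particular solution, by a single integration.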

Linear differential equations: Introduction

Let \(q(x), p_0(x), p_1(x), \cdots, p_{n-1}(x)\) be functions of \(x\). The equation \[y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y + q(x) = 0\tag{Eq:linode}\] of an unknown function \(y = y(x)\) is called an \(n\)-th order linear differential equation. If \(q(x) = 0\), then (Eq:linode) is said to be homogeneous; otherwise, it is said to be inhomogeneous. Let's rewrite (Eq:linode) using differential operators. Let \(D = \frac{d}{dx}\) denote the differential operator with respect to \(x\). That is, \(Dy = \frac{d}{dx}y = y'\) and \(D^ny = \frac{d^n}{dx^n}y = y^{(n)}\), etc. By combining these operators, we can define a new operator \(E\) by \[E = D^n + p_{n-1}(x)D^{n-1} + \cdots + p_1(x)D + p_0(x). \tag{Eq:diffop}\] Then, (Eq:linode) can be written concisely as \[Ey + q(x) = 0.\] Theorem (Linear combinations of solutions of a homogeneous linear ODE) Let \(y_1(x)\) and \(y_2(x)\) be functions. For any \(a, b \in \mathbb{R}\), the following holds: \[E(ay_1 + by_2) = aEy_1 + bEy_2.\]
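For instance, in the first-order case \(E = D + p_0(x)\), linearity can be verified directly: \[\begin{eqnarray*} E(ay_1 + by_2) &=& (ay_1 + by_2)' + p_0(x)(ay_1 + by_2)\\ &=& a\left(y_1' + p_0(x)y_1\right) + b\left(y_2' + p_0(x)y_2\right)\\ &=& aEy_1 + bEy_2. \end{eqnarray*}\] In particular, if \(y_1\) and \(y_2\) are solutions of the homogeneous equation \(Ey = 0\), then so is any linear combination \(ay_1 + by_2\).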

Exact differential equations

Consider the differential equation of the form \[P(x,y)dx + Q(x,y)dy = 0 \tag{Eq:exact}\] where \(P(x,y)\) and \(Q(x,y)\) are some bivariate functions. In this differential equation, the variables \(x\) and \(y\) have equal status (neither is an independent nor a dependent variable). Thus, the solution should also be given as some equation in \(x\) and \(y\). If necessary, \(y\) may be interpreted as an implicit function of \(x\). The differential equation (Eq:exact) is said to be an exact (or total) differential equation if there exists a function \(F(x,y)\) of class \(C^1\) such that \[\begin{eqnarray*} F_x(x,y) &=& P(x,y),\\ F_y(x,y) &=& Q(x,y). \end{eqnarray*}\] This function \(F(x,y)\) is called a potential function. Remark. Recall that \[\begin{eqnarray*} F_x(x,y) &=& \frac{\partial F}{\partial x}(x,y),\\ F_y(x,y) &=& \frac{\partial F}{\partial y}(x,y). \end{eqnarray*}\] □ As the following theorem shows, exact differential equations can be solved by finding a potential function.
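For example, the equation \[2xy\,dx + (x^2 + 1)\,dy = 0\] is exact: taking \(F(x,y) = x^2y + y\), we have \(F_x(x,y) = 2xy = P(x,y)\) and \(F_y(x,y) = x^2 + 1 = Q(x,y)\). Its solutions are given implicitly by \(x^2y + y = C\), that is, \(y = \frac{C}{x^2 + 1}\) for a constant \(C\).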

Separable differential equations

Separable differential equations are the simplest differential equations. They are of the following form: \[y' - f(x)g(y) = 0\] where \(f(x)\) is a function of \(x\) and \(g(y)\) is a function of \(y\). This equation can be formally rearranged into \[\frac{dy}{g(y)} = f(x)dx,\] which can be integrated as \[\int\frac{dy}{g(y)} = \int f(x)dx.\] This method of solving differential equations is called the separation of variables. Example. Let us solve the differential equation \[y' = xy.\] If \(y = 0\) (constant), the given differential equation is clearly satisfied. Thus, \(y = 0\) is a solution. Next, suppose \(y \neq 0\). By separating variables, we have \[\frac{dy}{y} = xdx.\] Integrating both sides, \[\log|y| = \int\frac{dy}{y} = \int xdx = \frac{1}{2}x^2 + c\] where \(c\) is a constant. Thus, we have \[y = C e^{\frac{1}{2}x^2}\tag{Eq:egode}\] where we set \(C = \pm e^c\). This solution also includes the case \(y = 0\) (constant) if we allow \(C = 0\). Thus, the general solution is (Eq:egode) with an arbitrary constant \(C\).
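We can verify (Eq:egode) directly: if \(y = Ce^{\frac{1}{2}x^2}\), then \[y' = Cxe^{\frac{1}{2}x^2} = xy,\] so \(y\) satisfies the equation for any constant \(C\), including \(C = 0\).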

Differential equations: Introduction

An equation involving the derivatives of a (univariate) function \(y = y(x)\) of \(x\) is called an ordinary differential equation. That is, an ordinary differential equation (ODE) is an equation of the form \[F(x, y, y', \cdots, y^{(n)}) = 0\tag{Eq:ode}\] where \(F(x, z_0, z_1, \cdots, z_n)\) is a function of \((n+2)\) variables. If the highest order of the derivatives involved in a differential equation is \(n\), then it is called an \(n\)-th order (ordinary) differential equation. Example. \(3y - xy' + 2(y')^2 = 0\) is a (non-linear) first-order differential equation. \(3y - xy' + 2y'' = 0\) is a (linear) second-order differential equation. □ If the function \(y = y(x)\) on an interval \(I\) satisfies (Eq:ode) for any \(x \in I\), that is, \[F(x, y(x), y'(x), \cdots, y^{(n)}(x)) = 0,\] then \(y = y(x)\) is said to be a solution of the differential equation (Eq:ode) on \(I\). Example. Consider the second-order differential equation \[y'' + y = 0.\]
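For instance, \(y(x) = \sin x\) is a solution of this equation on \(I = \mathbb{R}\), since \[y''(x) + y(x) = -\sin x + \sin x = 0\] for every \(x \in \mathbb{R}\); the same computation shows that \(y = \cos x\) is also a solution.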

Taylor series, Maclaurin series

Suppose the function \(f(x)\) is of class \(C^\infty\) in a neighborhood of \(x = a\). Then, we can define the following power series: \[\begin{eqnarray*} \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}t^n &=& \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n\\ &=&f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots \end{eqnarray*}\] where \(t = x - a\). If this power series has a positive radius of convergence, and the function defined by it matches \(f(x)\) in a neighborhood of \(x = a\), we say the function \(f(x)\) is analytic. \(f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^n\) is called the Taylor series of \(f(x)\) at \(x=a\). A Taylor series at \(x = 0\) is called a Maclaurin series. Example. Let us define the function \(f(x)\) on the open interval \((a-r, a+r)\) by the following power series \[f(x) = \sum_{n=0}^{\infty}a_n(x-a)^n\tag{Eq:eg1}\] where \(r > 0\) is the radius of convergence of the power series.
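For example, for \(f(x) = e^x\) we have \(f^{(n)}(0) = e^0 = 1\) for all \(n\), so its Maclaurin series is \[e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots,\] which has an infinite radius of convergence and equals \(e^x\) for every \(x \in \mathbb{R}\); hence \(e^x\) is analytic.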

Uniform convergence of sequence of functions

Consider a sequence of functions \(\{f_n(x)\}\) on an interval \(I\): \[f_1(x), f_2(x), f_3(x), \cdots.\] This is an indexed collection of functions on \(I\). We can consider different types of convergence of such a sequence of functions: \(f_n(x) \to f(x)\). If the convergence is "uniform," then some properties of the functions \(f_n(x)\), such as continuity, integrability, and differentiability, are inherited by the limit function \(f(x)\). Given a sequence of functions \(\{f_n(x)\}\) on \(I\), we can define a sequence of real numbers \(\{f_n(a)\}\) for each \(a \in I\). If the sequence \(\{f_n(a)\}\) converges for each \(a\in I\), we can define its limit as \(f(a)\), thereby defining a function \(f(x)\) on \(I\). In this case, we say the sequence of functions \(\{f_n(x)\}\) converges to the function \(f(x)\) and write \[\lim_{n\to\infty}f_n(x) = f(x).\] We refer to this type of convergence as point-wise convergence. Remark. In a logical form, point-wise convergence is stated as follows: for each \(x \in I\) and every \(\varepsilon > 0\), there exists an \(N \in \mathbb{N}\) such that, for all \(n \in \mathbb{N}\), if \(n \geq N\) then \(|f_n(x) - f(x)| < \varepsilon\).
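A standard example: \(f_n(x) = x^n\) on \(I = [0, 1]\) converges point-wise to \[f(x) = \begin{cases} 0 & (0 \leq x < 1),\\ 1 & (x = 1), \end{cases}\] but the convergence is not uniform: each \(f_n\) is continuous, yet the limit \(f\) is not, so continuity is not inherited under mere point-wise convergence.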

Calculus of power series

Functions defined by power series If the power series \(\sum_{n=0}^{\infty}a_nx^n\) has radius of convergence \(r > 0\), then it defines a function \(f(x) = \sum_{n=0}^{\infty}a_nx^n\) on the open interval \((-r, r)\). Theorem (Continuous power series) Let \(\sum_{n=0}^{\infty}a_nx^n\) be a power series with radius of convergence \(r > 0\). Then, the function \(f(x) = \sum_{n=0}^{\infty}a_nx^n\) is continuous on \((-r, r)\). Proof. It suffices to show that \(f(x)\) is continuous on the open interval \(I = (-s, s)\) where \(s\) is an arbitrary real number such that \(0 < s < r\). Let \(t = \frac{r + s}{2}\). Then \(s < t < r\) and the power series \(\sum_{n=0}^{\infty}a_nx^n\) converges absolutely at \(x = t\). Step 1. For the partial sum \(f_n(x) = \sum_{k=0}^{n}a_kx^k\), we show the following: (*) For any \(\varepsilon > 0\), there exists an \(N\in\mathbb{N}_0\) such that, for all \(n\in\mathbb{N}_0\) and all \(x\in I\), if \(n \geq N\), then \(|f(x) - f_n(x)| < \varepsilon\).
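For example, the geometric series \(\sum_{n=0}^{\infty}x^n\) has radius of convergence \(r = 1\), so by the theorem \[f(x) = \sum_{n=0}^{\infty}x^n = \frac{1}{1-x}\] is continuous on \((-1, 1)\), as its closed form confirms.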

Limit superior, limit inferior

Some sequences of practical importance (in science and engineering) may not have a limit but still have a limit superior and a limit inferior. While convergent sequences are bounded, bounded sequences do not always converge. However, bounded sequences always have a limit superior and a limit inferior. First, let us review some basic notions. Definition (Upper bound, lower bound) Let \(S\) be a subset of \(\mathbb{R}\): \(S \subset \mathbb{R}\). Let \(\alpha \in \mathbb{R}\) be a real number. If we have \(x \leq \alpha\) for all \(x \in S\), then \(\alpha\) is said to be an upper bound of \(S\). If we have \(x \geq \alpha\) for all \(x \in S\), then \(\alpha\) is said to be a lower bound of \(S\). Definition (Supremum, infimum) Let \(S\) be a subset of \(\mathbb{R}\): \(S \subset \mathbb{R}\). If \(S\) is bounded above, and there exists a least element among the upper bounds, then this least upper bound is called the supremum of \(S\), denoted \(\sup S\). That is, if \(U(S)\) represents the set of upper bounds of \(S\), then \(\sup S = \min U(S)\).
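For example, the bounded sequence \(a_n = (-1)^n\left(1 + \frac{1}{n}\right)\) (\(n \geq 1\)) has no limit, yet \[\limsup_{n\to\infty}a_n = 1, \qquad \liminf_{n\to\infty}a_n = -1,\] since the even-indexed terms decrease to \(1\) while the odd-indexed terms increase to \(-1\).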

Power series

In this post, we deal with a class of series that contain a variable, called power series. Definition (Power series) Let \(\{a_n\}\) be a sequence of real numbers, \(b\) a real number, and \(x\) a variable. The series given by \[\sum_{n=0}^{\infty}a_n(x-b)^n = a_0 + a_1(x-b) + a_2(x-b)^2 + \cdots\tag{Eq:PS}\] is called a power series centered at \(x=b\). If we set \(t = x - b\) in (Eq:PS), we obtain a power series centered at \(t = 0\). In most practical cases, it suffices to deal with power series centered at \(x = 0\). Example. A polynomial in \(x\), \(f(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n\), can be regarded as a power series by setting \(a_{n+1} = a_{n+2} = \cdots = 0\). In general, the power series \(\sum_{n=0}^{\infty}a_nx^n\) is a polynomial in \(x\) if \(a_n = 0\) for all but finitely many \(n\). □ If the power series \(\sum_{n=0}^{\infty}a_nx^n\) is a polynomial, we can substitute an arbitrary real number for \(x\) to calculate the sum. If it is not a polynomial, whether the sum is defined depends on the value of \(x\).
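For example, the geometric series \(\sum_{n=0}^{\infty}x^n\) has a sum exactly when \(|x| < 1\), namely \[\sum_{n=0}^{\infty}x^n = \frac{1}{1-x} \quad (|x| < 1),\] and it diverges for \(|x| \geq 1\).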

Convergence of series

Absolute convergence As we have seen in a previous post, if a positive term series has a sum, then the sum does not depend on the order of addition. However, this is not necessarily the case for general series. If a series converges absolutely, then it has some nice properties similar to those of positive term series. See also: Series: Introduction Definition (Absolute convergence, conditional convergence) The series \(\sum_{n=0}^{\infty}a_n\) is said to converge absolutely if the series \(\sum_{n=0}^{\infty}|a_n|\) has a sum. A series is said to converge conditionally if it has a sum but does not converge absolutely. Remark. In other words, the series \(\sum_{n=0}^{\infty}a_n\) converges conditionally if \(\sum_{n=0}^{\infty}a_n\) converges and \(\sum_{n=0}^{\infty}|a_n|\) diverges to \(+\infty\). □ Theorem (Absolutely converging series has a unique sum) Suppose that the series \(\sum_{n=0}^{\infty}a_n\) converges absolutely. Then, the following hold: The series \(\sum_{n=0}^{\infty}a_n\) has a sum, and the sum does not depend on the order of addition.
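A classical example of conditional convergence is the alternating harmonic series: \[\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \log 2\] converges, while \(\sum_{n=1}^{\infty}\frac{1}{n}\) diverges to \(+\infty\); for such a series, rearranging the terms can change the sum.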

Series: Introduction

Given a sequence \(\{a_n\}\), the expression \[\sum_{n=0}^{\infty}a_n = a_0 + a_1 + a_2 + \cdots\] is called a series (or infinite series). This expression may or may not have a value. At this point, it is purely formal. Note that the order of addition matters: we first add \(a_0\) and \(a_1\), to the result of which we add \(a_2\), to the result of which we add \(a_3\), and so on (not, say, first adding \(a_{101}\) and \(a_{58}\), then adding \(a_{333051}\), and so on). We will see, however, that for a special class of series (the positive term series), the order of addition does not matter if the series converges. Example. The sum of a geometric progression \(\{ar^n\}\), that is, \(\sum_{n=0}^{\infty}ar^n\), is called a geometric series. It is understood that \(r^0 = 1\), including the case when \(r = 0\). □ Given a series \(\sum_{n=0}^{\infty}a_n\) and a number \(n\geq 0\), the sum \[\sum_{k=0}^{n}a_k = a_0 + a_1 + \cdots + a_n\] is called the \(n\)-th partial sum. We may then consider the sequence \(\{s_n\}\) of partial sums; if it converges, its limit is called the sum of the series.
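For example, for the geometric series with \(r \neq 1\), the \(n\)-th partial sum is \[s_n = \sum_{k=0}^{n}ar^k = a\cdot\frac{1 - r^{n+1}}{1 - r},\] which converges to \(\frac{a}{1-r}\) as \(n \to \infty\) if \(|r| < 1\).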

Improper multiple integrals

Consider integrating a function \(f(x,y)\) over a region \(D\) which may not be bounded or closed. In the case of a univariate function, this corresponds to the improper integral, where we took limits of the endpoints of a closed interval. In the case of multiple integrals, we adopt the notion of a "sequence of regions." Consider a sequence of regions \(\{K_n\}\) where each \(K_n\) is a subset of \(\mathbb{R}^2\) that satisfies the following conditions: (a) \(K_1 \subset K_2 \subset \cdots \subset K_n \subset K_{n+1} \subset \cdots\). (b) For all \(n\in \mathbb{N}\), \(K_n \subset D\). (c) For all \(n \in\mathbb{N}\), \(K_n\) is bounded and closed. (d) For any bounded closed set \(F\) that is included in \(D\) (i.e., \(F \subset D\)), if \(n\) is sufficiently large, then \(F \subset K_n\). In other words: for all bounded closed \(F \subset D\), there exists some \(N\in \mathbb{N}\) such that, for all \(n\in \mathbb{N}\), if \(n \geq N\) then \(F \subset K_n\). Using such a sequence of regions, the improper integral of \(f(x,y)\) over \(D\) is defined as the limit \(\lim_{n\to\infty}\iint_{K_n}f(x,y)dxdy\), provided the limit exists.
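For example, take \(D = \mathbb{R}^2\), \(f(x,y) = e^{-x^2-y^2}\), and \(K_n = \{(x,y)\mid x^2+y^2 \leq n^2\}\), which satisfies conditions (a) through (d). Using polar coordinates, \[\iint_{K_n}e^{-x^2-y^2}dxdy = \int_0^{2\pi}\int_0^n e^{-r^2}r\,drd\theta = \pi\left(1 - e^{-n^2}\right) \to \pi \quad (n\to\infty),\] so the improper integral of \(e^{-x^2-y^2}\) over \(\mathbb{R}^2\) is \(\pi\).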

Applications of multiple integrals

We can use multiple integrals to compute areas and volumes of various shapes. Area of a planar region Definition (Area) Let \(D\) be a bounded closed region in \(\mathbb{R}^2\). \(D\) is said to have an area if the multiple integral of the constant function 1 over \(D\), \(\iint_Ddxdy\), exists. Its value is denoted by \(\mu(D)\): \[\mu(D) = \iint_Ddxdy.\] Example. Let us calculate the area of the disk \(D = \{(x,y)\mid x^2 + y^2 \leq a^2\}\). Using the polar coordinates \(x = r\cos\theta, y = r\sin\theta\), we have \(dxdy = rdrd\theta\), and the ranges of \(r\) and \(\theta\) are \([0, a]\) and \([0, 2\pi]\), respectively. Thus, \[\begin{eqnarray*} \mu(D) &=& \iint_Ddxdy\\ &=&\int_0^a\left(\int_0^{2\pi}rd\theta\right)dr\\ &=&2\pi\int_0^a rdr\\ &=&2\pi\left[\frac{r^2}{2}\right]_0^a = \pi a^2. \end{eqnarray*}\] □ Volume of a solid figure Definition (Volume) Let \(V\) be a solid figure in the \((x,y,z)\) space \(\mathbb{R}^3\). \(V\) is said to have a volume if the multiple integral \(\iiint_Vdxdydz\) exists; its value is denoted by \(\mu(V) = \iiint_Vdxdydz\).
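For example, the volume of the ball \(V = \{(x,y,z)\mid x^2+y^2+z^2 \leq a^2\}\) can be computed over the disk \(D\) above, integrating the height \(2\sqrt{a^2 - x^2 - y^2}\) between the upper and lower hemispheres: \[\mu(V) = \iint_D 2\sqrt{a^2 - x^2 - y^2}\,dxdy = \int_0^{2\pi}\int_0^a 2\sqrt{a^2 - r^2}\,r\,drd\theta = 2\pi\left[-\frac{2}{3}\left(a^2 - r^2\right)^{\frac{3}{2}}\right]_0^a = \frac{4}{3}\pi a^3.\]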