A function of the form \[f(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0,\] where \(a_n, a_{n-1}, \cdots, a_0 \in\mathbb{R}\), is a continuous function on \(\mathbb{R}\). Such functions are called polynomial functions.
If \(g(x)\) and \(h(x)\) are polynomial functions such that \(h(x) \neq 0\), the function \(f(x)\) defined by
\[f(x) = \frac{g(x)}{h(x)}\]
is continuous on \(\{x \mid x \in \mathbb{R}, h(x) \neq 0\}\). Such functions are called rational functions. If \(h(x) = 1\) then \(f(x) = g(x)\), so every polynomial function is also a rational function (i.e., polynomial functions are a special case of rational functions).
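As a quick numerical illustration (a minimal Python sketch with a made-up rational function, not one from the text), we can observe that away from the zeros of the denominator, small changes in \(x\) produce small changes in \(f(x)\):

```python
# A minimal sketch (hypothetical example, not from the text): a rational
# function f = g/h is continuous wherever the denominator h is nonzero.
def g(x):
    return x**2 + 1

def h(x):
    return x - 1

def f(x):
    return g(x) / h(x)  # undefined at x = 1, where h(x) = 0

# Near a point a with h(a) != 0, values of f(x) approach f(a).
a = 2.0
for dx in [0.1, 0.01, 0.001]:
    print(dx, abs(f(a + dx) - f(a)))  # differences shrink toward 0
```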
We can "algebraically" define functions other than polynomial or rational functions. For example, \(f(x) = \sqrt{x}\) is not a rational function, but satisfies an algebraic equality
\[[f(x)]^2 - x = 0.\]
Definition (Algebraic function)
The continuous function \(f(x)\) is said to be an algebraic function if there exist polynomial functions \(g_0(x), g_1(x), \cdots, g_n(x)\) such that the following identity is satisfied:
\[g_n(x)[f(x)]^n + g_{n-1}(x)[f(x)]^{n-1} + \cdots + g_1(x)f(x) + g_0(x) = 0.\]
Example. Rational functions are a special case of algebraic functions: if \(f(x) = g(x)/h(x)\) with polynomial \(g\) and \(h\), then \(h(x)f(x) - g(x) = 0\), which is an identity of the above form with \(g_1 = h\) and \(g_0 = -g\). □
Example. Let us prove that the function \(f(x) = \sqrt{x}\) defined on \(x \geq 0\) is continuous.
First, consider the case when \(a > 0\). For all \(\varepsilon > 0\), let us define \(\delta= \varepsilon\sqrt{a}\). Then, if \(x \geq 0\) and \(0 < |x - a| < \delta\),
\[|\sqrt{x} - \sqrt{a}| = \frac{|x - a|}{\sqrt{x} + \sqrt{a}} \leq \frac{|x - a|}{\sqrt{a}} < \frac{\delta}{\sqrt{a}} = \varepsilon.\]
Thus, \(\lim_{x\to a}\sqrt{x} = \sqrt{a}\), so \(f(x) = \sqrt{x}\) is continuous at \(a\).
Next, consider the case when \(a = 0\). For any \(\varepsilon > 0\), take \(\delta = \varepsilon^2 > 0\). If \(0 < x < \delta\), then \(|\sqrt{x} - 0| = \sqrt{x} < \sqrt{\delta} = \varepsilon\). Thus, \(\lim_{x\to +0}\sqrt{x} = 0\). Therefore, \(f(x) = \sqrt{x}\) is continuous at all \(x\geq 0\). □
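The choice \(\delta = \varepsilon\sqrt{a}\) in the argument above can be sanity-checked numerically; here is a minimal Python sketch (the particular \(a\) and \(\varepsilon\) are arbitrary choices):

```python
import math

# A minimal numerical check of the proof above (the values of a and eps
# are arbitrary choices): with delta = eps * sqrt(a), every sampled x
# with |x - a| < delta satisfies |sqrt(x) - sqrt(a)| < eps.
a, eps = 2.0, 1e-3
delta = eps * math.sqrt(a)

worst = max(
    abs(math.sqrt(a + t * delta) - math.sqrt(a))
    for t in (k / 1000 for k in range(-999, 1000))  # sample |x - a| < delta
)
print(worst < eps)  # expected: True
```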
Exponential functions
Definition (Exponential function)
Let \(a > 0\) be a real number. The function defined by
\[f(x) = a^x\]
is called an exponential function with base \(a\). In particular, when we simply say the exponential function, we mean the exponential function with base \(e\) (Napier's constant).
Exponential functions are continuous everywhere on \(\mathbb{R}\). If \(a > 1\), then \(a^x\) is a strictly increasing function. If \(0 < a < 1\), then \(a^x\) is a strictly decreasing function. If \(a = 1\), then \(a^x = 1\) for all \(x\in\mathbb{R}\).
But what do we mean by \(a^x\), exactly? Review how we introduced \(e^x\) through \(\exp(x)\).
First, we provide the following theorem without proof.
Theorem
Let \(f(x)\) be a continuous and strictly monotone increasing function defined on an interval \(I\). Then \(f(x)\) has the inverse \(f^{-1}(x)\) which is also continuous and strictly monotone increasing.
Let \(f(x)\) be a continuous and strictly monotone decreasing function defined on an interval \(I\). Then \(f(x)\) has the inverse \(f^{-1}(x)\) which is also continuous and strictly monotone decreasing.
Remark. If \(f(x)\) is a strictly monotone function (either increasing or decreasing) on an interval \(I\), then the function is a bijection from \(I\) to \(f(I)\). For every bijection, there exists an inverse map that is also bijective. □
Using this theorem, we can see that for each exponential function \(f(x) = a^x\), there is its inverse function \(f^{-1}(x)\) which we define as the logarithmic function with base \(a\) denoted \(f^{-1}(x) = \log_a(x)\). When the base is \(e\) (Napier's constant), \(\log_{e}(x)\), this is the logarithm we have defined earlier and we often omit the base to write simply \(\log (x)\) or \(\ln(x)\).
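As a quick sanity check that \(\log_a\) really undoes \(a^x\), here is a minimal Python sketch (the bases and sample points are arbitrary choices):

```python
import math

# A minimal sketch (arbitrary bases and sample points): log_a is the
# inverse of x |-> a**x, so log_a(a**x) == x up to floating-point error.
for a in [2.0, 10.0, math.e]:
    for x in [-1.5, 0.0, 0.7, 3.0]:
        y = a ** x
        assert abs(math.log(y, a) - x) < 1e-9
print("log_a inverts a**x at the sampled points")
```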
Let \( t = e^x - 1\). Then \(x = \log(1 + t)\). As \(x \to 0\), \(t \to 0\), so \[\lim_{x\to 0}\frac{e^x - 1}{x} = \lim_{t\to 0}\frac{t}{\log(1+t)} = 1\] using the result of Part 1.
■
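The limit can also be checked numerically; here is a minimal Python sketch (the shrinking sample values of \(x\) are an arbitrary choice):

```python
import math

# A minimal numerical check (arbitrary sample points): (e**x - 1) / x
# approaches 1 as x -> 0 from either side.
for x in [0.1, -0.1, 0.01, -0.01, 0.001, -0.001]:
    print(x, (math.exp(x) - 1) / x)  # the ratios approach 1 as |x| shrinks
```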
Trigonometric functions
We already know \(\sin\) and \(\cos\). The tangent function is defined as
\[\tan x = \frac{\sin x}{\cos x}, ~ x \in \mathbb{R}\setminus\left\{\left(n + \frac{1}{2}\right)\pi\mid n\in \mathbb{Z}\right\}.\]
Quiz. Why do we exclude the points \(x = \left(n + \frac{1}{2}\right)\pi, n \in \mathbb{Z}\), from the domain of the tangent function? □
\(\sin\) and \(\cos\) have the fundamental period of \(2\pi\) whereas \(\tan\) has the fundamental period of \(\pi\). That is, for \(n\in\mathbb{Z}\),
\[\sin(x + 2n\pi) = \sin x, \quad \cos(x + 2n\pi) = \cos x, \quad \tan(x + n\pi) = \tan x.\]
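These periodicity relations are easy to spot-check numerically; here is a minimal Python sketch (the sample points are arbitrary choices):

```python
import math

# A minimal spot check (arbitrary sample points): sin and cos have period
# 2*pi, while tan has period pi.
for x in [0.3, 1.2, 2.5]:
    for n in [-2, 1, 3]:
        assert abs(math.sin(x + 2 * n * math.pi) - math.sin(x)) < 1e-9
        assert abs(math.cos(x + 2 * n * math.pi) - math.cos(x)) < 1e-9
        assert abs(math.tan(x + n * math.pi) - math.tan(x)) < 1e-9
print("periodicity holds at the sampled points")
```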
Since \(\sin\), \(\cos\), and \(\tan\) are periodic functions (and hence not monotone), they don't have inverse functions. Nevertheless, by restricting their domains, we may define the inverse functions.
\(\sin x\) is strictly monotone increasing on the closed interval \(\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]\). Therefore it has an inverse function on this domain, which we define as \(\arcsin x\). In other words, we consider \(\sin x\) as a function
\[\sin: \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] \to [-1, 1]\]
and its inverse is
\[\arcsin: [-1, 1] \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right].\]
Similarly, we restrict the domain of \(\cos x\) to \([0, \pi]\) to define its inverse, which we call \(\arccos x\):
\[\cos: [0, \pi] \to [-1, 1]\]
and
\[\arccos: [-1, 1] \to [0, \pi].\]
We restrict the domain of \(\tan x\) to the open interval \(\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)\) to define its inverse, which we call \(\arctan x\):
\[\tan: \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \to \mathbb{R}\]
and
\[\arctan: \mathbb{R} \to \left(-\frac{\pi}{2}, \frac{\pi}{2}\right).\]
Note that these definitions of inverse trigonometric functions are not unique. We could restrict the domains of the trigonometric functions differently. For example, we could restrict the domain of \(\tan\) to \(\left(\frac{\pi}{2}, \frac{3\pi}{2}\right)\), which would give a different (but equally valid) inverse.
Example. Let us find the value of \(\arcsin\left(\sin\frac{5\pi}{6}\right)\). First note that \(\sin\frac{5\pi}{6} = \sin\frac{\pi}{6}\) (why?) and \(\frac{\pi}{6} \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]\). Therefore \(\arcsin\left(\sin\frac{5\pi}{6}\right) = \frac{\pi}{6}\). □
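We can confirm this with a short Python check (a minimal sketch; it relies on the fact that math.asin returns values in \(\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]\), matching the principal branch chosen above):

```python
import math

# Python's math.asin returns values in [-pi/2, pi/2], matching the
# principal branch chosen above, so arcsin(sin(5*pi/6)) gives pi/6.
x = math.asin(math.sin(5 * math.pi / 6))
print(x, math.pi / 6)                 # both approximately 0.5235987755982988
print(abs(x - math.pi / 6) < 1e-12)   # True
```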
Hyperbolic functions
The hyperbolic cosine, hyperbolic sine, and hyperbolic tangent are defined, respectively, by
\[
\begin{eqnarray}
\cosh x &=& \frac{e^x + e^{-x}}{2},\\
\sinh x &=& \frac{e^x - e^{-x}}{2},\\
\tanh x &=& \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}}.
\end{eqnarray}
\]
The domain of these functions is \(\mathbb{R}\).
Why are their names similar to trigonometric functions? Note that
\[\cosh^2\theta - \sinh^2\theta = 1.\]
Recall that the equation \(x^2 - y^2 = 1\) represents the unit hyperbola on \(\mathbb{R}^2\), while \(x^2 + y^2 = 1\) represents the unit circle on \(\mathbb{R}^2\).
Therefore, on \(\mathbb{R}^2\), while \((\cos\theta, \sin\theta)\) corresponds to a point on the unit circle centered at the origin, \((\cosh\theta, \sinh\theta)\) corresponds to a point on the unit hyperbola.
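Here is a minimal Python spot check of this identity (the sample values of \(\theta\) are arbitrary choices):

```python
import math

# A minimal check (arbitrary sample values of t): (cosh(t), sinh(t)) lies
# on the unit hyperbola x**2 - y**2 = 1, just as (cos(t), sin(t)) lies on
# the unit circle x**2 + y**2 = 1.
for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    x, y = math.cosh(t), math.sinh(t)
    print(t, x * x - y * y)  # each value is 1 up to floating-point error
```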
Also, compare the following relations with the definitions of the hyperbolic functions:
\[\begin{eqnarray}
\cos x &=& \frac{e^{ix} + e^{-ix}}{2},\\
\sin x &=& \frac{e^{ix} - e^{-ix}}{2i},\\
\tan x &=& \frac{\sin x}{\cos x} = \frac{e^{ix} - e^{-ix}}{i(e^{ix} + e^{-ix})}.
\end{eqnarray}\]
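These complex-exponential formulas can also be verified numerically with Python's cmath module; here is a minimal sketch (the sample points are arbitrary choices):

```python
import cmath
import math

# A minimal check (arbitrary sample points): cos x = (e^{ix} + e^{-ix}) / 2
# and sin x = (e^{ix} - e^{-ix}) / (2i), mirroring cosh and sinh with e^{ix}
# in place of e^x.
for x in [0.3, 1.0, 2.5]:
    e_plus, e_minus = cmath.exp(1j * x), cmath.exp(-1j * x)
    print(abs((e_plus + e_minus) / 2 - math.cos(x)))     # ~0
    print(abs((e_plus - e_minus) / (2j) - math.sin(x)))  # ~0
```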