where \(a_n, a_{n-1}, \cdots, a_0 \in\mathbb{R}\) is a continuous function on \(\mathbb{R}\). Such functions are called polynomial functions.
If \(g(x)\) and \(h(x)\) are polynomial functions such that \(h(x) \neq 0\), the function \(f(x)\) defined by
\[f(x) = \frac{g(x)}{h(x)}\]
is continuous on \(\{x \mid x \in \mathbb{R}, h(x) \neq 0\}\). Such functions are called rational functions. If \(h(x) = 1\) then \(f(x) = g(x)\), so every polynomial function is also a rational function (i.e., polynomial functions are a special case of rational functions).
We can ``algebraically'' define functions other than polynomial or rational functions. For example, \(f(x) = \sqrt{x}\) is not a rational function, but satisfies an algebraic equality
\[[f(x)]^2 - x = 0.\]
Definition (Algebraic function)
The continuous function \(f(x)\) is said to be an algebraic function if there exist polynomial functions \(g_0(x), g_1(x), \cdots, g_n(x)\), with \(g_n(x)\) not identically zero, such that the following identity is satisfied:
\[g_n(x)[f(x)]^n + g_{n-1}(x)[f(x)]^{n-1} + \cdots + g_1(x)f(x) + g_0(x) = 0.\]
Example. Rational functions are a special case of algebraic functions. Indeed, \(f(x) = g(x)/h(x)\) satisfies the identity \(h(x)f(x) - g(x) = 0\). □
Example. Let us prove that the function \(f(x) = \sqrt{x}\) defined on \(x \geq 0\) is continuous.
First, consider the case when \(a > 0\). For all \(\varepsilon > 0\), let us define \(\delta= \varepsilon\sqrt{a}\). Then, if \(x \geq 0\) and \(0 < |x - a| < \delta\),
\[|\sqrt{x} - \sqrt{a}| = \frac{|x - a|}{\sqrt{x} + \sqrt{a}} \leq \frac{|x - a|}{\sqrt{a}} < \frac{\delta}{\sqrt{a}} = \varepsilon.\]
Thus, \(\lim_{x\to a}\sqrt{x} = \sqrt{a}\), so \(f(x)\) is continuous at every \(a > 0\).
Next, consider the case when \(a = 0\). For any \(\varepsilon > 0\), take \(\delta = \varepsilon^2 > 0\). If \(0 < x < \delta\), then \(|\sqrt{x} - 0| = \sqrt{x} < \sqrt{\delta} = \varepsilon\). Thus, \(\lim_{x\to +0}\sqrt{x} = 0\). Therefore, \(f(x) = \sqrt{x}\) is continuous at all \(x\geq 0\). □
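As a quick numeric sanity check of the \(\varepsilon\)–\(\delta\) argument above (a sketch, not part of the proof), we can verify that with \(\delta = \varepsilon\sqrt{a}\), points within \(\delta\) of \(a\) are mapped within \(\varepsilon\) of \(\sqrt{a}\):

```python
import math

# Spot-check the proof's choice delta = eps * sqrt(a) for a sample a > 0:
# whenever 0 < |x - a| < delta, we should get |sqrt(x) - sqrt(a)| < eps.
a, eps = 2.0, 1e-3
delta = eps * math.sqrt(a)
for x in [a - 0.9 * delta, a + 0.5 * delta, a + 0.99 * delta]:
    assert abs(math.sqrt(x) - math.sqrt(a)) < eps
```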
Exponential functions
Definition (Exponential function)
Let \(a > 0\) be a real number. The function defined by
\[f(x) = a^x\]
is called an exponential function with base \(a\). In particular, when we simply say the exponential function, the base is \(e\) (Napier's constant).
Exponential functions are continuous everywhere on \(\mathbb{R}\). If \(a > 1\), then \(a^x\) is a strictly increasing function. If \(0 < a < 1\), then \(a^x\) is a strictly decreasing function. If \(a = 1\), then \(a^x = 1\) for all \(x\in\mathbb{R}\).
But what do we mean by \(a^x\), exactly? Review how we introduced \(e^x\) through \(\exp(x)\).
First, we provide the following theorem without proof.
Theorem
Let \(f(x)\) be a continuous and strictly monotone increasing function defined on an interval \(I\). Then \(f(x)\) has an inverse \(f^{-1}(x)\), defined on \(f(I)\), which is also continuous and strictly monotone increasing.
Similarly, let \(f(x)\) be a continuous and strictly monotone decreasing function defined on an interval \(I\). Then \(f(x)\) has an inverse \(f^{-1}(x)\), defined on \(f(I)\), which is also continuous and strictly monotone decreasing.
Remark. If \(f(x)\) is a strictly monotone function (either increasing or decreasing) on an interval \(I\), then the function is a bijection from \(I\) to \(f(I)\). For every bijection, there exists an inverse map that is also bijective. □
Using this theorem, we can see that for each exponential function \(f(x) = a^x\), there is its inverse function \(f^{-1}(x)\) which we define as the logarithmic function with base \(a\) denoted \(f^{-1}(x) = \log_a(x)\). When the base is \(e\) (Napier's constant), \(\log_{e}(x)\), this is the logarithm we have defined earlier and we often omit the base to write simply \(\log (x)\) or \(\ln(x)\).
Let us compute \(\lim_{x\to 0}\frac{e^x - 1}{x}\). Let \(t = e^x - 1\). Then \(x = \log(1 + t)\). As \(x \to 0\), \(t \to 0\), so \[\lim_{x\to 0}\frac{e^x - 1}{x} = \lim_{t\to 0}\frac{t}{\log(1+t)} = 1\] using the result of Part 1.
■
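This limit is easy to confirm numerically (a sanity check, not a proof; `math.expm1` computes \(e^x - 1\) without the cancellation error a direct subtraction would incur for small \(x\)):

```python
import math

# (e^x - 1)/x should approach 1 as x -> 0; the error is about x/2,
# since (e^x - 1)/x = 1 + x/2 + O(x^2).
for x in [1e-1, 1e-3, 1e-6]:
    ratio = math.expm1(x) / x
    assert abs(ratio - 1.0) < x
```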
Trigonometric functions
We already know \(\sin\) and \(\cos\). The tangent function is defined as
\[\tan x = \frac{\sin x}{\cos x}, ~ x \in \mathbb{R}\setminus\left\{\left(n + \frac{1}{2}\right)\pi\mid n\in \mathbb{Z}\right\}.\]
Quiz. Why do we exclude the points \(x = \left(n + \frac{1}{2}\right)\pi, n \in \mathbb{Z}\), from the domain of the tangent function? □
\(\sin\) and \(\cos\) have the fundamental period of \(2\pi\) whereas \(\tan\) has the fundamental period of \(\pi\). That is, for \(n\in\mathbb{Z}\),
\[\sin(x + 2n\pi) = \sin x, \quad \cos(x + 2n\pi) = \cos x, \quad \tan(x + n\pi) = \tan x.\]
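The periods of \(\sin\), \(\cos\), and \(\tan\) can be spot-checked numerically (a sketch with a few sample points):

```python
import math

# sin and cos are invariant under shifts by 2*n*pi; tan under shifts by n*pi.
for x in [0.3, 1.0, 2.5]:
    for n in [-2, -1, 1, 3]:
        assert math.isclose(math.sin(x + 2 * n * math.pi), math.sin(x), abs_tol=1e-9)
        assert math.isclose(math.cos(x + 2 * n * math.pi), math.cos(x), abs_tol=1e-9)
        assert math.isclose(math.tan(x + n * math.pi), math.tan(x), abs_tol=1e-9)
```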
Since \(\sin\), \(\cos\), and \(\tan\) are periodic functions (and hence not injective on their full domains), they do not have inverse functions as they stand. Nevertheless, by restricting their domains, we may define inverse functions.
\(\sin x\) is strictly monotone increasing on the closed interval \(\left[-\frac{\pi}{2}, \frac{\pi}{2}\right]\). Therefore it has an inverse function on this domain, which we define as \(\arcsin x\). In other words, we consider \(\sin x\) as a function
\[\sin: \left[-\frac{\pi}{2}, \frac{\pi}{2}\right] \to [-1, 1]\]
and define
\[\arcsin: [-1, 1] \to \left[-\frac{\pi}{2}, \frac{\pi}{2}\right].\]
Similarly, we restrict the domain of \(\cos x\) to \([0, \pi]\) to define its inverse, which we call \(\arccos x\):
\[\cos: [0, \pi] \to [-1, 1]\]
and
\[\arccos: [-1, 1] \to [0, \pi].\]
We restrict the domain of \(\tan x\) to the open interval \(\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)\) to define its inverse, which we call \(\arctan x\):
\[\tan: \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \to \mathbb{R}\]
and
\[\arctan: \mathbb{R} \to \left(-\frac{\pi}{2}, \frac{\pi}{2}\right).\]
Note that these definitions of inverse trigonometric functions are not unique. We could restrict the domains of the trigonometric functions differently. For example, we could restrict the domain of \(\tan\) as
\[\tan: \left(\frac{\pi}{2}, \frac{3\pi}{2}\right) \to \mathbb{R},\]
which would yield a different (but equally valid) inverse.
Example. Let us find the value of \(\arcsin\left(\sin\frac{5\pi}{6}\right)\). First note that \(\sin\frac{5\pi}{6} = \sin\frac{\pi}{6}\) (why?) and \(\frac{\pi}{6} \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]\). Therefore \(\arcsin\left(\sin\frac{5\pi}{6}\right) = \frac{\pi}{6}\). □
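Python's `math.asin` implements exactly this branch of the inverse, so the example can be checked directly (a quick numeric confirmation):

```python
import math

# arcsin returns values in [-pi/2, pi/2], so arcsin(sin(5*pi/6)) is pi/6,
# NOT 5*pi/6 (which lies outside the restricted domain).
x = 5 * math.pi / 6
assert math.isclose(math.asin(math.sin(x)), math.pi / 6, rel_tol=1e-9)
```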
Hyperbolic functions
The hyperbolic cosine, hyperbolic sine, and hyperbolic tangent are defined, respectively, by
\[
\begin{eqnarray}
\cosh x &=& \frac{e^x + e^{-x}}{2},\\
\sinh x &=& \frac{e^x - e^{-x}}{2},\\
\tanh x &=& \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}}.
\end{eqnarray}
\]
The domain of these functions is \(\mathbb{R}\).
Why are their names similar to trigonometric functions? Note that
\[\cosh^2\theta - \sinh^2\theta = 1.\]
Recall that the equation \(x^2 - y^2 = 1\) represents the unit hyperbola on \(\mathbb{R}^2\), while \(x^2 + y^2 = 1\) represents the unit circle.
Therefore, on \(\mathbb{R}^2\), while \((\cos\theta, \sin\theta)\) corresponds to a point on the unit circle centered at the origin, \((\cosh\theta, \sinh\theta)\) corresponds to a point on the unit hyperbola.
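The parallel between the two identities is easy to verify numerically (a sanity check at a few sample parameter values):

```python
import math

# (cosh t, sinh t) lies on the unit hyperbola x^2 - y^2 = 1,
# just as (cos t, sin t) lies on the unit circle x^2 + y^2 = 1.
for t in [-2.0, 0.0, 0.5, 3.0]:
    x, y = math.cosh(t), math.sinh(t)
    assert math.isclose(x * x - y * y, 1.0, rel_tol=1e-9)
    c, s = math.cos(t), math.sin(t)
    assert math.isclose(c * c + s * s, 1.0, rel_tol=1e-9)
```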
Also, compare the following relations with the definitions of the hyperbolic functions:
\[\begin{eqnarray}
\cos x &=& \frac{e^{ix} + e^{-ix}}{2},\\
\sin x &=& \frac{e^{ix} - e^{-ix}}{2i},\\
\tan x &=& \frac{\sin x}{\cos x} = \frac{e^{ix} - e^{-ix}}{i(e^{ix} + e^{-ix})}.
\end{eqnarray}\]
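These complex-exponential expressions for \(\cos\) and \(\sin\) can be checked with Python's `cmath` module (a numeric illustration at a few sample points):

```python
import cmath
import math

# cos x = (e^{ix} + e^{-ix})/2 and sin x = (e^{ix} - e^{-ix})/(2i):
# compare with the definitions of cosh and sinh above.
for x in [0.0, 0.7, 2.0]:
    eix, emix = cmath.exp(1j * x), cmath.exp(-1j * x)
    assert cmath.isclose((eix + emix) / 2, math.cos(x), abs_tol=1e-9)
    assert cmath.isclose((eix - emix) / (2j), math.sin(x), abs_tol=1e-9)
```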