Introductory university-level calculus, linear algebra, abstract algebra, probability, statistics, and stochastic processes.
Limit of a univariate function
Let \(f(x)\) be a function. Suppose we move \(x\in\mathbb{R}\) towards \(a\) while keeping \(x \neq a\). If \(f(x)\) approaches a constant value \(\alpha\) irrespective of how \(x\) approaches \(a\), we say that \(f(x)\) converges to \(\alpha\) as \(x \to a\) and write \[\lim_{x\to a}f(x) = \alpha\] or \[f(x) \to \alpha \text{ as \(x \to a\)}.\]
Remark. \(a\) need not belong to \(\text{dom}(f)\) (the domain of \(f\)) as long as \(x\) can approach \(a\) arbitrarily closely. □
But what does this mean exactly? Here's a rigorous definition in terms of what is called the \(\varepsilon\)-\(\delta\) argument.
Definition (Limit of a function)
We say that the function \(f(x)\) converges to \(\alpha\) as \(x \to a\) and write
\[\lim_{x\to a}f(x) = \alpha\]
if the following condition is satisfied.
For any \(\varepsilon > 0\), there exists \(\delta > 0\) such that, for all \(x\in \text{dom}(f)\), if \(0 < |x - a| < \delta\) then \(|f(x) - \alpha| < \varepsilon\).
Remark. We are implicitly assuming \(\varepsilon, \delta \in\mathbb{R}\). □
Here's how this definition works. Suppose \(f(x) \to \alpha\) as \(x \to a\). Let us pick any positive real number \(\varepsilon\). However small this \(\varepsilon\) may be, if \(x\) is sufficiently close to \(a\), we always have \(|f(x) - \alpha| < \varepsilon\). Here ``sufficiently close'' means that we can pick some sufficiently small positive real number \(\delta\) such that \(0 < |x - a| < \delta\) implies \(|f(x) - \alpha| < \varepsilon\). In other words, we move \(x\) closer and closer to \(a\) until \(|f(x) - \alpha| < \varepsilon\) holds. Conversely, if this operation is possible for any \(\varepsilon\), it makes sense to say that \(f(x)\) converges to \(\alpha\) as \(x \to a\).
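The mechanics of the definition can be illustrated numerically. The sketch below (a sanity check only, not a proof: sampling finitely many points can at best fail to find a counterexample) tests a candidate \(\delta\) for a given \(\varepsilon\) by sampling points with \(0 < |x - a| < \delta\). The function name `check_limit` and its parameters are illustrative, not part of any standard library.

```python
def check_limit(f, a, alpha, eps, delta, samples=10_000):
    """Numerically test the epsilon-delta condition:
    sample x with 0 < |x - a| < delta and check |f(x) - alpha| < eps.
    Returns False as soon as a counterexample is found."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)   # offsets strictly inside (0, delta)
        for x in (a - offset, a + offset):   # approach a from both sides
            if abs(f(x) - alpha) >= eps:
                return False                 # this delta does not work for this eps
    return True

# f(x) = x^2 + 1 converges to 2 as x -> 1 (the example below).
f = lambda x: x**2 + 1
print(check_limit(f, a=1.0, alpha=2.0, eps=0.1, delta=0.04))  # True: delta small enough
print(check_limit(f, a=1.0, alpha=2.0, eps=0.1, delta=0.06))  # False: delta too large
```

Note that passing this check for one \(\varepsilon\) proves nothing; the definition demands a suitable \(\delta\) for every \(\varepsilon > 0\).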
Example. Consider \(f(x) = x^2 + 1\). We have
\[\lim_{x \to 1}f(x) = 2.\]
Let \(\varepsilon = 0.1\). Let us find \(\delta > 0\) such that \(0 < |x - 1| < \delta\) implies \(|f(x) - 2| < \varepsilon\). Suppose \(|f(x) - 2| < 0.1\). This is equivalent to \(0.9 < x^2 < 1.1\), that is, \(\sqrt{0.9} < x < \sqrt{1.1}\) (restricting attention to \(x > 0\) near 1).
Since \(\sqrt{0.9} - 1 = -0.0513\cdots\) and \(\sqrt{1.1} - 1 = 0.0488\cdots\), if
\[|x - 1| < \sqrt{1.1} - 1,\]
then we have
\[|f(x) - 2| < 0.1 = \varepsilon.\]
So \(\delta\) can be any positive number less than \(\sqrt{1.1} - 1\). For example, let \(\delta = 0.04\). Then \(0 < |x - 1| < 0.04\) implies \(0.96 < x < 1.04\), which implies \(-0.0784 < x^2 - 1 < 0.0816\), so \(|f(x) - 2| = |x^2 - 1| < 0.1 = \varepsilon\). □
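The numbers in this example are easy to reproduce. The following quick check (assuming \(f(x) = x^2 + 1\) as above) confirms the values of \(\sqrt{0.9} - 1\) and \(\sqrt{1.1} - 1\), and that the choice \(\delta = 0.04\) keeps \(x^2 - 1\) within \((-0.1, 0.1)\):

```python
import math

# The boundary offsets around a = 1 for eps = 0.1:
print(math.sqrt(0.9) - 1)  # approximately -0.0513
print(math.sqrt(1.1) - 1)  # approximately  0.0488

# With delta = 0.04, the endpoints x = 0.96 and x = 1.04 give
# f(x) - 2 = x^2 - 1, and both values lie strictly inside (-0.1, 0.1):
print(0.96**2 - 1)  # approximately -0.0784
print(1.04**2 - 1)  # approximately  0.0816
```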